Tenderlove Making

Webcam photos with Ruby

Let’s do something fun! In this post we’ll take a photo using your webcam in Ruby.

NOTE: This only works on OS X

I haven’t tried making it work on other operating systems. Not that I don’t like other operating systems, I just haven’t made it work. :-)

Installing the Gem

We’ll use the av_capture gem. It wraps the AVCapture framework on OS X. To install it, just do:

$ gem install av_capture

Using the Gem

I’ll paste the code here first, then explain it:

require 'av_capture'

# Create a recording session
session = AVCapture::Session.new

# Find the first video capable device
dev = AVCapture.devices.find(&:video?)

# Output the camera's name
$stderr.puts dev.name

# Connect the camera to the recording session
session.run_with(dev) do |connection|

  # Capture an image and write it to $stdout
  $stdout.write connection.capture
end

First the program creates a new capture session. OS X can capture from many multimedia devices, and we hook them together through a session. Next, it grabs the first device attached to the machine that has video capability. If your machine has multiple cameras, you may want to adjust this code. After it outputs the name of the camera, we tell the session to start and use that device.

The camera can only be used while the session is open, inside the block provided to run_with. Inside the block, we ask the connection to capture an image, then write the image to $stdout.

Running the code

I’ve saved the program in a file called thing.rb. If you run the program like this:

$ ruby thing.rb | open -f -a /Applications/Preview.app

it should open Preview.app with an image captured from the camera.

Taking Photos Interactively

Let’s make this program a little more interactive:

require 'av_capture'
require 'io/console'

session = AVCapture::Session.new
dev = AVCapture.devices.find(&:video?)

session.run_with(dev) do |connection|
  loop do
    case $stdin.getch
    when 'q' then break # quit when you hit 'q'
    else
      IO.popen("open -g -f -a /Applications/Preview.app", 'w') do |f|
        f.write connection.capture
      end
    end
  end
end

This program will just sit there until you press a key. Press ‘q’ to quit, or any other key to take a photo and display it in Preview.app. Requiring io/console lets us read one character from $stdin as soon as it is typed, and the call to IO.popen lets us write the data to Preview.app.

A Photo Server using DRb

It takes a little time for the camera to turn on before the program can take a photo, which adds lag every time we want one. In the spirit of over-engineering things, let’s create a photo server using DRb. The server will keep the camera on and ready to take photos, and the client will ask the server for photos.

Server code

Here is our server code:

require 'av_capture'
require 'drb'

class PhotoServer
  attr_reader :photo_request, :photo_response

  def initialize
    @photo_request  = Queue.new
    @photo_response = Queue.new
    @mutex          = Mutex.new
  end

  def take_photo
    @mutex.synchronize do
      photo_request << "x"
      photo_response.pop
    end
  end
end

server = PhotoServer.new

Thread.new do
  session = AVCapture::Session.new
  dev = AVCapture.devices.find(&:video?)

  session.run_with(dev) do |connection|
    while server.photo_request.pop
      server.photo_response.push connection.capture
    end
  end
end

URI = "druby://localhost:8787"
DRb.start_service URI, server
DRb.thread.join

The PhotoServer object has a request queue and a response queue. When a client asks to take a photo by calling the take_photo method, it writes a request to the request queue, then waits for a photo to be pushed onto the response queue.

The AVCapture session’s run block waits for a request to appear on the photo_request queue. When it gets a request on the queue, it takes a photo and writes the photo to the response queue.
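The same request/response handshake can be sketched without the camera at all. Here a worker thread stands in for the AVCapture session, and the string "jpeg-bytes" is just a placeholder for connection.capture (the names requests and responses are illustrative):

```ruby
# A stripped-down sketch of the PhotoServer handshake: one queue carries
# requests, the other carries responses, and a worker thread services them.
requests  = Queue.new
responses = Queue.new

worker = Thread.new do
  # pop blocks until a request arrives; a nil request shuts the worker down
  while requests.pop
    responses.push "jpeg-bytes" # placeholder for connection.capture
  end
end

requests << "x"       # what take_photo does (under its mutex)
photo = responses.pop # blocks until the worker responds
requests << nil       # tell the worker to exit
worker.join
```

The mutex in take_photo guarantees that one client's request and the matching response can't interleave with another client's.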

At the bottom of the file, we connect the PhotoServer object to DRb on port 8787, and join the DRb server thread.

Client Code

Here is our client code:

require 'drb'

SERVER_URI = "druby://localhost:8787"

photoserver = DRbObject.new_with_uri SERVER_URI
print photoserver.take_photo

The client code connects to the DRb server on port 8787, requests a photo, then writes the photo to $stdout.

Running the code

In one terminal, run the server code like this:

$ ruby server.rb

Then in another terminal, run the client code like this:

$ ruby client.rb | open -f -a /Applications/Preview.app

You should have a photo show up in Preview.app. You can kill the server program by doing Ctrl-C.

Speed comparison

Just for fun, let’s compare the speed of the first program to the speed of the client program just using time. Here is the first program:

$ time ruby thing.rb > /dev/null
FaceTime HD Camera (Built-in)

real	0m3.217s
user	0m0.151s
sys	0m0.069s

Here is the client program:

$ time ruby client.rb > /dev/null

real	0m0.183s
user	0m0.070s
sys	0m0.038s

The first program takes about 3 seconds to take a photo, while the second “client” program only takes 200ms or so. The second program is much faster because the server keeps the camera “hot”; most of our time is spent getting the camera ready rather than taking photos.

Weird photos

Here are some weird photos that I made while I was writing this:

one two three

Happy Wednesday! <3<3<3<3


AdequateRecord Pro™: Like ActiveRecord, but more adequate

TL;DR: AdequateRecord is a set of patches that adds cache stuff to make ActiveRecord 2x faster

I’ve been working on speeding up Active Record, and I’d like to share what I’ve been working on! First, here is a graph:


This graph shows the number of times you can call Model.find(id) and Model.find_by_name(name) per second on each stable branch of Rails. Since it is “iterations per second”, a higher value is better. I tried running this benchmark with Rails 1.15.6, but it doesn’t work on Ruby 2.1.

Here is the benchmark code I used:

require 'active_support'
require 'active_record'
require 'benchmark/ips'

p ActiveRecord::VERSION::STRING

ActiveRecord::Base.establish_connection adapter: 'sqlite3', database: ':memory:'
ActiveRecord::Base.connection.instance_eval do
  create_table(:people) { |t| t.string :name }
end

class Person < ActiveRecord::Base; end

person = Person.create! name: 'Aaron'

id   = person.id
name = person.name

Benchmark.ips do |x|
  x.report('find')         { Person.find id }
  x.report('find_by_name') { Person.find_by_name name }
end

Now let’s talk about how I made these performance improvements.

What is AdequateRecord Pro™?

AdequateRecord Pro™ is a fork of ActiveRecord with some performance enhancements. In this post, I want to talk about how we achieved high performance in this branch. I hope you find these speed improvements to be “adequate”.

Group discounts for AdequateRecord Pro™ are available depending on the number of seats you wish to purchase.

How Does ActiveRecord Work?

ActiveRecord constructs SQL queries after doing a few transformations. Here’s an overview of the transformations:


The first transformation comes from your application code. When you do something like this in your application:

Post.where(name: name).order(:id)

Active Record creates an instance of an ActiveRecord::Relation that contains the information you passed to where, or order, or whatever you called. As soon as you call a method that turns this Relation instance into an array, Active Record transforms the relation objects into ARel objects, which represent the SQL query AST. Finally, it converts the AST to an actual SQL string and passes that string to the database.

These same transformations happen when you run something like Post.find(id), or Post.find_by_name(name).

Separating Static Data

Let’s consider this statement:

Post.find(id)

In previous versions of Rails, when this code was executed, if you watched your log files, you would see something like this go by:

SELECT * FROM posts WHERE id = 10
SELECT * FROM posts WHERE id = 12
SELECT * FROM posts WHERE id = 22
SELECT * FROM posts WHERE id = 33

In later versions of Rails, you would see log messages that looked something like this:

SELECT * FROM posts WHERE id = ? [id, 10]
SELECT * FROM posts WHERE id = ? [id, 12]
SELECT * FROM posts WHERE id = ? [id, 22]
SELECT * FROM posts WHERE id = ? [id, 33]

This is because we started separating the dynamic parts of the SQL statement from the static parts of the SQL statement. In the first log file, the SQL statement changed on every call. In the second log file, you see the SQL statement never changes.

Now, the problem is that even though the SQL statement never changes, Active Record still performs all the translations we discussed above. In order to gain speed, what do we do when a known input always produces the same output? Cache the computation.

Keeping the static data separated from the dynamic data allows AdequateRecord to cache the static data computations. What’s even more cool is that even databases that don’t support prepared statements will see an improvement.
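The caching idea can be sketched in a few lines of plain Ruby. This is only an illustration of the principle, not AdequateRecord's actual code; SQL_CACHE and cached_select are made-up names:

```ruby
# Cache the static computation: the SQL text depends only on the
# (table, column) pair, so compute it once and reuse the result.
SQL_CACHE = {}

def cached_select(table, column)
  SQL_CACHE[[table, column]] ||= "SELECT * FROM #{table} WHERE #{column} = ?"
end

first  = cached_select(:posts, :id)
second = cached_select(:posts, :id) # cache hit: returns the same string
```

Only the dynamic bind value (the ?) changes between calls, which is exactly the part that was separated out of the statement.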

Supported Forms

Not every call can benefit from this caching. Right now the only forms that are supported look like this:

Post.find(id)
Post.find_by(name: name)

This is because calculating a cache key for these calls is extremely easy. We know these statements don’t do any joins, don’t have any “OR” clauses, etc. Both of these statements indicate the table to query, the columns to select, and the where clauses right in the Ruby code.

This isn’t to say that queries like this:

Post.where(name: name).first

can’t benefit from the same techniques. In those cases we just need to be smarter about calculating our cache keys. Also, this type of query will never be able to match speeds with the find_by_XXX form because the find_by_XXX form can completely skip creating the ActiveRecord::Relation objects. The “finder” form is able to skip the translation process completely.

Using the “chained where” form will always create the relation objects, and we would have to calculate our cache key from those. In the “chained where” form, we could possibly skip the “relation -> AST” and “AST -> SQL statement” translations, but you still have to pay the price of allocating ActiveRecord::Relation objects.

When can I use this?

You can try the code now by using the adequaterecord branch on GitHub. I think we will merge this code to the master branch after Rails 4.1 has been released.

What’s next?

Before merging this to master, I’d like to do this:

  1. The current incarnation of AdequateRecord needs to be refactored a bit. I have finished the “red” and “green” phases, and now it’s time for the “refactor” step.
  2. The cache should probably be an LRU. Right now, it just caches all of the things, when we should probably be smarter about cache expiry. The cache should be bounded by number of tables and combination of columns, but that may get too large.

After merging to master I’d like to start exploring how we can integrate this cache to the “chained where” form.

On A Personal Note

Feel free to quit reading now. :-)

The truth is, I’ve been yak shaving on this performance improvement for years. I knew it was possible in theory, but the code was too complex. Finally I’ve paid off enough technical debt to the point that I was able to make this improvement a reality. Working on this code was at times extremely depressing. Paying technical debt is really not fun, but at least it is very challenging. Some time I will blurrrgh about it, but not today!

Thanks to work (AT&T) for giving me the time to do this. I think we can make the next release of Rails (the release after 4.1) the fastest version ever.

EDIT: I forgot to add that newer Post.find_by(name: name) syntax is supported, so I put it in the examples.


Me And Facebook Open Academy

TL;DR: I am working with many students to improve Rails and it is fun!

WARNING: This is a non-technical blog post. I usually like to write about tech stuff, but I think this is pertinent so I’m going to write about it!

What is the Open Academy?

Facebook Open Academy is basically a program that gives Computer Science students at different universities the opportunity to work with various projects in the Open Source community. I think there are about 20 Open Source projects involved, and Rails is one of them. This year there are about 250 students participating, and the Rails team has 22 students. This is the second year that the Open Academy has been running, and I participated last year too.

How I got involved

I was contacted by Jay Borenstein, who is a professor at Stanford and also works for Facebook. He told me the idea of the program: there are many CS students learning theory, but maybe not learning the skills they need to work on older code bases, with remote teams, or with Open Source code bases. He wanted to provide students with the opportunity to work closely with people in the Open Source community so that they could get these skills.

This struck a chord with me because I had to learn this stuff on my own, and I would have loved to have this kind of opportunity. For payment, I asked Jay to give me an Honorary Degree from Stanford. He has not gotten back to me on that.

Our Team

I’m not sure how many schools are involved total, but our team of 22 students is from 8 different schools: Sichuan University, University of Helsinki, Cornell University, Harvard, Princeton, University of Waterloo, UC Berkeley, and University of Washington.

I am extremely honored to work with these students, but I have to admit that as a college dropout it is extremely intimidating. The students we’re working with are very bright and have lots of potential. It’s inspiring to work with them, and I am slightly jealous because they are much better than I was at their age. I think this is a good thing because it means that the next generation of programmers will be even better than my generation.

Last year I was the only mentor. This year I was able to trick, er, convince @_matthewd, @pixeltrix, and @bitsweat to help out as mentors.

Our Timeline

We started a couple weeks ago. But this weekend Facebook sponsored travel and lodging for all students, faculty, and mentors, to go to Palo Alto for a hackathon. So this weekend I was able to work with our team in person.

The timeline for this program is rather challenging because each school has a different schedule, so we will be working with all students for a different length of time.

Our Work

For the first few weeks we will have the students work on bug fixes in Rails. This will help us to assess their skill levels so that we can give them longer term projects that are a good fit. Fixing bugs has turned out to be a challenge because most “easy” bugs in Rails have been fixed, so most of the bugs left in the tracker are particularly challenging. I haven’t figured out a solution to this problem yet, but we are managing (I think). So if you find easy bugs in Rails, don’t fix them, but tell my students to fix them. ;-)

Anyway, after a couple weeks we will give them larger projects to work on (and I’ll describe those in a later post).

Improvements for the Rails team

Getting the students up and running with a dev environment was a huge challenge. Some students tried the Rails dev box, but running a VM on their hardware was just too slow, so we ended up with all of the students developing on their native hardware. Of course this was also a pain because some students had issues like having Fink, MacPorts, and homebrew all installed at the same time. For the Agile Web Development with Rails book, Sam Ruby has tests that start with a completely blank Ubuntu VM, and go all the way through getting a Rails app up and running. I think we need to do something like that, but for doing dev work against Rails.

One extremely frustrating example of why we need tests for setting up the dev environment is that in the guides it says that you should execute bundle install --without db. This is fine, except that when you do that, the --without db option is written to .bundle/config and persists between runs of bundler. We (the mentors) didn’t know the students had done that, and were ripping our hair out for an hour or so trying to figure out why ActiveRecord couldn’t find the adapter gems, or why running bundle install wouldn’t install the database gems. I find this to be an extremely frustrating behavior of bundler.

These are some things that I think we can improve.

Improvements for Universities

All of our students can program. Maybe not in Ruby, but they know some other language. They have to pick up Ruby, but I don’t think that’s a big deal. A larger problem is that not all of them knew about git, or about VCS in general. Jay tried to help with this problem by inviting @chacon to give a talk on git. I think this helped, but I really think that git should be part of normal CS curriculum. It’s very difficult for me to imagine writing any amount of code without some kind of VCS. What if you mess up? What if your computer dies? How do you remember what the code used to be? Not using a VCS seems like a huge waste of time. Some of the students were already familiar with git, but some weren’t. I am unclear about the role VCS plays among Universities, but I strongly believe it should be taught in all CS programs.

Maybe I can get people at GitHub in contact with people at different Universities?

Departing thoughts

It blows my mind that we have students from such prestigious Universities, and that these Universities are forward thinking enough to participate in this program. I am extremely honored to work with them. I also have to say thank you to Facebook for sponsoring this work even though Rails has basically nothing to do with Facebook. I also have to say thanks to my work (AT&T) for allowing me to have the job that I do, and also giving me some time to work with these students.

All of our students are part of the Friends of Rails organization on GitHub so you can watch some of our progress there. I will try to make more blurrrrgggghhh posts as we progress.

Post Script

Since I am a College dropout, I have an extreme case of Impostor Syndrome. Working with such prestigious schools made me extremely nervous. Last year I went to dinner with some of the faculty members of University of Helsinki. After a few beers, I was able to muster my courage and said “Before we start this program, I have to admit something. I never graduated college.” Their response: “So?”



Enabling ttyO1 on BeagleBone

I’m really just making this post because I will eventually forget how to do this, and maybe it will come in handy to others as well.


I’ve got an MSP430 reading temperature and humidity information from an RHT03 sensor. It sends the data over a serial port, and I’d like to read that data on my BeagleBone.

I have P1.1 and P1.2 on the launchpad connected to pin 24 and 26 on the BeagleBone. Pin 24 and 26 map to UART1, so I need to enable UART1 on the BeagleBone so it will show up in /dev.


The BeagleBone is running Angstrom v2012.12, and I believe this is the same version that is on the BeagleBone Black, so it should work there as well:

root@beaglebone:~# uname -a
Linux beaglebone 3.8.13 #1 SMP Tue Jun 18 02:11:09 EDT 2013 armv7l GNU/Linux
root@beaglebone:~# cat /etc/version 
Angstrom v2012.12


To enable the UART, you just need to do this:

root@beaglebone:~# echo BB-UART1 > /sys/devices/bone_capemgr.*/slots

After running that, there should be a message in dmesg about enabling the UART, and /dev/ttyO1 should be available.

Where does BB-UART1 come from?

“BB-UART1” is the name of a device cape, and it seems Angstrom already ships with a bunch of these. If you look under /lib/firmware, there will be a bunch of them. “BB-UART1” came from the file /lib/firmware/BB-UART1-00A0.dts. If you don’t have that file, then echoing to the slots file will not work.

Enabling at boot

I want the tty to be available every time I boot the machine. Angstrom doesn’t use normal System V init scripts, so you have to do something different. You need two files, and a symbolic link.

First I created /usr/local/bin/enable_uart1, and it looks like this:

root@beaglebone:/# cat /usr/local/bin/enable_uart1
#!/bin/sh
echo BB-UART1 > /sys/devices/bone_capemgr.7/slots


(make sure enable_uart1 is executable).

Then I created /lib/systemd/enable_uart1.service, and it looks like this:

root@beaglebone:/# cat /lib/systemd/enable_uart1.service
[Unit]
Description=Enable UART1
After=syslog.target network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/enable_uart1

[Install]
WantedBy=multi-user.target

Then I created a symbolic link:

root@beaglebone:/# cd /etc/systemd/system/
root@beaglebone:/# ln /lib/systemd/enable_uart1.service enable_uart1.service

Then I loaded and enabled the service:

root@beaglebone:/# systemctl daemon-reload
root@beaglebone:/# systemctl start enable_uart1.service
root@beaglebone:/# systemctl enable enable_uart1.service

After running these commands, /dev/ttyO1 should be available even after rebooting the machine.

(Most of this systemd information came from this blog post)


Next I need to get wifi working, and my BeagleBone will be perfect for real-time monitoring of Sausage Box One.


One Danger of Freedom Patches

I posted a benchmark on twitter about comparing a DateTime with a string. This is a short blurrrggghhh post about the benchmark and why there is such a performance discrepancy.

Here is the benchmark:

require 'benchmark/ips'
require 'active_support/all' if ENV['AS']
require 'date'

now = DateTime.now

Benchmark.ips do |x|
  x.report("lhs") { now == "foo" }
  x.report("rhs") { "foo" == now }
end

First we’ll run the benchmark without Active Support, then we’ll run the benchmark with Active Support.

The Benchmarks

Without Active Support

[aaron@higgins rails (master)]$ bundle exec ruby argh.rb 
Calculating -------------------------------------
                 lhs     57389 i/100ms
                 rhs     76222 i/100ms
                 lhs  2020064.6 (±14.7%) i/s -    9870908 in   5.015172s
                 rhs  3066573.4 (±13.2%) i/s -   15091956 in   5.012879s
[aaron@higgins rails (master)]$

With Active Support

[aaron@higgins rails (master)]$ AS=1 bundle exec ruby argh.rb 
Calculating -------------------------------------
                 lhs      4786 i/100ms
                 rhs     26327 i/100ms
                 lhs    62858.4 (±23.6%) i/s -     296732 in   5.019005s
                 rhs  2866546.6 (±26.6%) i/s -   13031865 in   4.996482s
[aaron@higgins rails (master)]$


In the benchmarks without Active Support, the performance is fairly close. The standard deviation is pretty big, but the numbers are within the ballpark of each other.

In the benchmarks with Active Support, the difference is enormous. It’s not even close. Why is this?

What is the deal?

This speed difference is due to a Freedom Patch that Active Support applies to the DateTime class:

class DateTime
  # Layers additional behavior on DateTime#<=> so that Time and
  # ActiveSupport::TimeWithZone instances can be compared with a DateTime.
  def <=>(other)
    super other.to_datetime
  end
end

DateTime includes the Comparable module which will call the <=> method whenever you call the == method. This Freedom Patch calls to_datetime on whatever is on the right hand side of the comparison. Rails Monkey Patches the String class to add a to_datetime method, but “foo” is not a valid Date, so it raises an exception.

The Comparable module rescues any exception that happens inside <=> and returns false. This means that any time you call DateTime#== with something that doesn’t respond to to_datetime, an exception is raised and immediately thrown away.

The original implementation just does object equality comparisons, returns false, and it’s done. This is why the original implementation is so much faster than the implementation with the Freedom Patch.
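You can see Comparable's role with a toy class; nothing here is Rails code. Note that in modern Ruby (2.3+) Comparable#== no longer rescues exceptions from <=>; instead it returns false when <=> returns nil, so this sketch models the incomparable case with nil:

```ruby
# A toy Comparable class: == is delegated to <=>, and an incomparable
# right-hand side makes <=> return nil, which Comparable#== turns into false.
class Stamp
  include Comparable
  attr_reader :t

  def initialize(t)
    @t = t
  end

  def <=>(other)
    return nil unless other.respond_to?(:t) # "foo" has no #t
    t <=> other.t
  end
end

a = Stamp.new(1) == Stamp.new(1) # <=> returned 0, so true
b = Stamp.new(1) == "foo"        # <=> returned nil, so false
```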

My 2 Cents

These are the dangers of Freedom Patching. As a Rails Core member, I know this is a controversial opinion, but I am not a fan of Freedom Patching. It seems convenient until one day you wonder why this code:

date == "foo"

so much slower than this code?

"foo" == date

Freedom Patching hides complexity behind a familiar syntax. It flips your world upside down; making code that seems reasonable do something unexpected. When it comes to code, I do not like the unexpected.

EDIT: I clarified the section about strings raising an exception. The actual exception occurs in another monkeypatch in Rails.


Dynamic Method Definitions

TL;DR: depending on your app, using define_method is faster on boot, consumes less memory, and probably doesn’t significantly impact performance.

Throughout the Rails code base, I typically see dynamic methods defined using class_eval. What I mean by “dynamic methods” is methods with names or bodies that are calculated at runtime, then defined.

For example, something like this:

class Foo
  class_eval <<-EORUBY, __FILE__, __LINE__ + 1
    def wow_#{Time.now.to_i}
      # ...
    end
  EORUBY
end

I’m not sure why they are defined this way versus using define_method. Why don’t we compare and contrast defining methods using class_eval and define_method?

The tests I’ll do here use MRI, Ruby 2.0.0.
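To make the two styles concrete before benchmarking them, here is the same trivial method defined both ways (the Greeter class and method names are made up for illustration):

```ruby
class Greeter
  # style 1: build source text, then parse and eval it
  class_eval <<-EORUBY, __FILE__, __LINE__ + 1
    def hello_eval
      "hi"
    end
  EORUBY

  # style 2: pass a name and a block; nothing is parsed at definition time
  define_method("hello_defm") { "hi" }
end

g = Greeter.new
```

Both methods behave identically when called; the difference is entirely in how they get defined.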

Definition Performance

When defining a method, is it faster to use class_eval or define_method? Here is a trivial benchmark where we simulate defining 100,000 methods:

require 'benchmark'


N = 100000
Benchmark.bm(13) do |x|
  x.report("define_method") {
    class Foo
      N.times { |i| define_method("foo_#{i}") { } }
    end
  }

  x.report("class_eval") {
    class Bar
      N.times { |i|
        class_eval <<-eoruby, __FILE__, __LINE__ + 1
          def bar_#{i}
          end
        eoruby
      }
    end
  }
end

Results on my machine:

$ ruby test.rb
                    user     system      total        real
define_method   0.290000   0.030000   0.320000 (  0.318222)
class_eval      1.300000   0.120000   1.420000 (  1.518075)

The class_eval version seems to be much slower than the define_method version.

Why is definition performance different?

The reason performance is different is that on each call to class_eval, MRI creates a new parser and parses the string. In the define_method case, the parser is only run once.

We can see when the parser executes using DTrace. We will compare two programs, one with class_eval:

class Foo
  5.times do |i|
    class_eval "def f_#{i}; end", __FILE__, __LINE__
  end
end

and one with define_method:

class Foo
  5.times do |i|
    define_method("f_#{i}") { }
  end
end

Using DTrace, we can monitor the parse-begin probe, which fires before MRI runs its parser and compiles instruction sequences:

ruby$target:::parse-begin
/copyinstr(arg0) == "test.rb"/
{
  printf("%s:%d\n", copyinstr(arg0), arg1);
}

Run DTrace using the define_method program:

$ sudo dtrace -q -s x.d -c"$(rbenv which ruby) test.rb"

Now run again with the class_eval version:

$ sudo dtrace -q -s x.d -c"$(rbenv which ruby) test.rb"

In the class_eval version, the parser runs and compiles instruction sequences 6 times, where the define_method case only runs once.

Call speed

It seems it’s faster to define methods via define_method, but which method is faster to call? Let’s try with a trivial example:

require 'benchmark/ips'


class Foo
  define_method("foo") { }
  class_eval 'def bar; end'
end

Benchmark.ips do |x|
  foo = Foo.new
  x.report("class_eval") { foo.bar }
  x.report("define_method") { foo.foo }
end

Here are the results on my machine:

$ ruby test.rb
Calculating -------------------------------------
          class_eval    115154 i/100ms
       define_method    106872 i/100ms
          class_eval  7454955.2 (±5.0%) i/s -   37194742 in   5.004418s
       define_method  5061216.4 (±5.2%) i/s -   25221792 in   5.000041s

Clearly methods defined with class_eval are faster. But does it matter? Let’s try a test where we add a little work to each method:

require 'benchmark/ips'


class Foo
  define_method("foo") { 10.times.map { "foo".length } }
  class_eval 'def bar; 10.times.map { "foo".length }; end'
end

Benchmark.ips do |x|
  foo = Foo.new
  x.report("define_method") { foo.foo }
  x.report("class_eval") { foo.bar }
end

Running these on my machine, I get:

$ ruby test.rb
Calculating -------------------------------------
       define_method     23949 i/100ms
          class_eval     23015 i/100ms
       define_method   261039.7 (±6.3%) i/s -    1317195 in   5.066215s
          class_eval   228819.7 (±12.2%) i/s -    1150750 in   5.286635s

A small amount of work is enough to overcome the performance difference between them.

How about memory consumption?

Let’s compare class_eval and define_method on memory. We’ll use this program to compare maximum RSS for N methods:

N = (ENV['N'] || 100_000).to_i

class Foo
  N.times do |i|
    if ENV['EVAL']
      class_eval "def bar_#{i}; end"
    else
      define_method("bar_#{i}") { }
    end
  end
end

Here are the results (I’ve trimmed them a little for clarity):

$ EVAL=1 time -l ruby test.rb
        3.77 real         3.68 user         0.08 sys
 127389696  maximum resident set size
         0  average shared memory size
         0  average unshared data size
         0  average unshared stack size
     38716  page reclaims
$ DEFN=1 time -l ruby test.rb
        0.69 real         0.63 user         0.05 sys
  69103616  maximum resident set size
         0  average shared memory size
         0  average unshared data size
         0  average unshared stack size
     24487  page reclaims

The maximum RSS for the class_eval version is much higher than the define_method version. Why?

I mentioned earlier that the class_eval version instantiates a new parser and compiles the source on every call. Each method defined via class_eval gets its own instruction sequences, whereas the define_method versions all share one.

Let’s verify this claim by using ObjectSpace.memsize_of_all!

Measuring Instructions

MRI will let us measure the total memory usage of the instruction sequences. Here we’ll modify the previous program to measure the instruction sequence size (in bytes) after defining many methods:

require 'objspace'

N = (ENV['N'] || 100_000).to_i

class Foo
  N.times do |i|
    if ENV['EVAL']
      class_eval "def bar_#{i}; end"
    else
      define_method("bar_#{i}") { }
    end
  end
end


p ObjectSpace.memsize_of_all(RubyVM::InstructionSequence)

Let’s see the difference:

$ EVAL=1 ruby test.rb
$ DEFN=1 ruby test.rb

Growth Rate

Now let’s see the growth rate between the two. Here is the growth rate for the class_eval case:

$ N=100 EVAL=1 ruby test.rb
$ N=1000 EVAL=1 ruby test.rb
$ N=10000 EVAL=1 ruby test.rb
$ N=100000 EVAL=1 ruby test.rb

Now let’s compare to the define_method case:

$ N=100 DEFN=1 ruby test.rb
$ N=1000 DEFN=1 ruby test.rb
$ N=10000 DEFN=1 ruby test.rb
$ N=100000 DEFN=1 ruby test.rb

The memory consumed by instruction sequences in the class_eval case grows continually, whereas in the define_method case it does not. MRI reuses the instruction sequences in the case of define_method, so we see no growth.


Defining methods with define_method is faster, consumes less memory, and depending on your application isn’t significantly slower than using a class_eval defined method. So what is the down side?


The main down side is that define_method creates a closure. The closure could hold references to large objects, and those large objects will never be garbage collected. For example:

class Foo
  x = "X" * 1024000 # Not GC'd
  define_method("foo") { }
end

class Bar
  x = "X" * 1024000 # Is GC'd
  class_eval("def foo; end")
end

The closure could access the local variable x in the Foo class, so that variable cannot be garbage collected.

When using define_method be careful not to hold references to objects you don’t care about.


I hope you enjoyed this! <3<3<3<3


YAML f7u12

YAML seems to be getting a bad rap lately, and I’m not surprised. YAML was used as the attack vector to execute arbitrary code in a Rails process and was even used to steal secrets from rubygems.org.

Let’s try to dissect the attack vector used, and see how YAML fits in to the picture.

The Metasploit Exploit

First let’s cover the most widely known vector. We (the Rails Security Team) have had reports of attempts to use the exploit on several websites; it’s in Metasploit, and a variant was used to attack rubygems.org.

The Troubled Code

I’m going to boil down the code involved in order to make the attack easier to digest. In Rails, there is a class defined that basically boils down to this definition:

class Helpers
  def initialize
    @module = Module.new
  end

  def []=(key, value)
    @module.module_eval <<-END_EVAL
      def #{value}(*args)
        # ... other stuff
      end
    END_EVAL
  end
end
This class defines routing helper methods on a module, and later this module is mixed in to your views. Let’s take a look at how to use this code to teach Linux Zealot an important lesson in security.


Our attacker knows that this class is defined in the system. Using YAML, along with Psych’s object deserialization, they can inject any object in to the system they choose. So how can they use this object? Let’s take a look at the YAML payload for exploiting this code, then talk about how it works:

--- !ruby/hash:Helpers
foo: |-
  mname; end; puts 'hello!'; def oops

We can clearly see the Ruby code in this YAML, but how does it get executed?

When Psych looks at the declared type !ruby/hash:Helpers, it says “ah, this is a subclass of a Ruby hash with the class of Helpers”. So it allocates a new Helpers instance, then calls the []= method for each of the key value pairs in the YAML.

In this case, the key value pair is:

['foo', "mname; end; puts 'hello!'; def oops"]

Let’s take the value passed in, and do string substitution in the module_eval part of the code:

def mname; end; puts 'hello!'; def oops(*args)
  # ... other stuff
end

It’s kind of hard for Humans to read, so let’s add some newlines:

def mname
end

puts 'hello!'

def oops(*args)
  # ... other stuff
end

And now it should be pretty clear how an attacker can execute arbitrary code.

How do we fix this?

We have a few options for fixing this.

  1. Replace our module_eval with a define_method
  2. Change Psych to check super classes and ensure it’s a hash
  3. Stop using normal Ruby methods to set hash key / value pairs

Let’s say we did all of these. Are we safe? No.

Proxy Exploits

This exploit was reported to the Rails Security team by Ben Murphy. It uses “proxy objects” (basically anything with a method_missing implementation that calls to send) to execute arbitrary code on the remote server.

The Troubled Code

Again, this example is boiled down to make it easier to understand. Let’s say our system has a proxy object like this:

class Proxy
  def initialize(server, arg1, meth)
    @server = server
    @arg1   = arg1
    @meth   = meth
  end

  def method_missing(mid, *args)
    @server.send(@meth, @arg1, *args)
  end
end
This proxy is used to forward messages to some other object (in this case @server).


Our attacker knows that this class exists in our system. However, merely instantiating this object isn’t enough. The application code must call a method (any method) on this object before the system is compromised. In the case of Rails, the attacker knows that the YAML objects will be embedded in the parameters hash, so things like:

### etc

will set this one off. So what does the YAML payload look like?

--- !ruby/object:Proxy
server: !ruby/module 'Kernel'
meth: eval
arg1: puts :omg

This particular form is a normal object load. However, the attacker sets the instance variables on the proxy object to convenient values. When the proxy is deserialized, @server will be Kernel, @meth will be "eval", and @arg1 will be "puts :omg".

When the application calls any method on the object, method_missing will trigger. If we expand out the instance variables in method_missing, it will look like this:

Kernel.send("eval", "puts :omg")

Again we can see that arbitrary code can be executed.
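Here is the same chain as a runnable sketch, again with a flag instead of output (and the Psych 4+ unsafe_load fallback as before):

```ruby
require 'psych'

# A boiled-down stand-in for a proxy class in the system
class Proxy
  def initialize(server, arg1, meth)
    @server = server
    @arg1   = arg1
    @meth   = meth
  end

  def method_missing(mid, *args)
    @server.send(@meth, @arg1, *args)
  end
end

yaml = <<-YAML
--- !ruby/object:Proxy
server: !ruby/module 'Kernel'
meth: eval
arg1: $proxied = true
YAML

proxy = Psych.respond_to?(:unsafe_load) ? Psych.unsafe_load(yaml) : Psych.load(yaml)

# Calling *any* method triggers method_missing, which evals @arg1
proxy.any_method_at_all
```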

This exploit is arguably more insidious than the previous exploit. In the previous exploit it was obvious that the app code that contained a call to eval could possibly execute arbitrary code. This exploit was able to trick our system in to calling eval even though the app code never explicitly eval’d anything!

You might be thinking “this proxy can’t be common”. Unfortunately there are classes similar to this in XMLRPC as well as Rack via a combination of ERB templates.

How do we fix this?

We applied our previous fixes, but our code was still susceptible to attacks. What do we do now? Why don’t we change Psych to:

  1. Only accept whitelisted classes
  2. But still allow Ruby “primitives” by default

Are we safe now? I guess that depends on what you mean by “safe”. Assuming that no object ever evals anything that’s set on your whitelisted classes, you may be safe from arbitrary code execution. But are you safe from other attacks?


Here’s a grab bag of attacks you can use even if “primitives” are allowed. Let your friends know how much you really love them with these tricks!

Eating up Object Space

In Ruby, symbols are never garbage collected. To DoS a server, simply send many Ruby symbols in YAML format:

- :foo
- :bar
- :baz
# etc

Infinite Loops!

Know a place where someone is doing this?

class Foo
  def some_method
    @ivar.each do |x|
      # ...
    end
  end
end

Teach them about infinite ranges! Psych will deserialize the following YAML so that @ivar on the object is a range from 1 to infinity.

--- !ruby/object:Foo
ivar: !ruby/range
  begin: 1
  end: .inf
  excl: false

The @ivar.each call will have a fun time looping forever!
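We can check what Psych actually produces for such a range payload (the unsafe_load fallback again accounts for Psych 4+):

```ruby
require 'psych'

yaml = <<-YAML
--- !ruby/range
begin: 1
end: .inf
excl: false
YAML

range = Psych.respond_to?(:unsafe_load) ? Psych.unsafe_load(yaml) : Psych.load(yaml)

range.class # => Range
range.end   # => Infinity
```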

Infinite Recursion!

Have a friend that recurses data structures like this:

class Foo
  def some_method
    stack = [@foo]

    until stack.empty?
      y = stack.pop

      if Array === y
        stack.concat y # push the elements back onto the stack
      else
        process y
      end
    end
  end
end

Send them this little present:

--- !ruby/object:Foo
foo: &70143145831360
- *70143145831360

They’ll be so excited! @foo will result in an array that contains itself. The loop will be processing this stack for quite some time! Good times!
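The anchor/alias pair above is just YAML's spelling of a self-referential array, which you can build and round-trip yourself:

```ruby
require 'psych'

a = []
a << a               # an array whose only element is itself

yaml = Psych.dump(a) # Psych emits an anchor and an alias, like the payload above

b = Psych.respond_to?(:unsafe_load) ? Psych.unsafe_load(yaml) : Psych.load(yaml)
b.first.equal?(b)    # the loaded array contains itself
```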

Pathological Regular Expressions

Your friend has given up on iterations. Just totally stopped. How can you show them how much you love them? I know! Send a pathological regular expression!

You know your friend is doing string matches:

class Foo
  def initialize
    @match = /ok/
  end

  def ok?
    @match =~ 'aaaaaaaaaaaaaaaaaaaaaaaaadaaaac'
  end
end

So you send a payload like this:

--- !ruby/object:Foo
match: !ruby/regexp /(b|a+)*c/

Loading this will result in a regular expression that takes an extremely long time to match the string. We can show our love by easily making every web process work hard on a bad regular expression!


So now we know that custom classes could result in evaled code, so we have to whitelist only “safe” classes. Now that we’ve whitelisted our safe classes, we need to make sure we don’t load symbols. After we make sure symbols are disallowed, we have to ensure loaded ranges are bounded. After we ensure loaded ranges are bounded, we have to check for self referential hashes and arrays, and the list goes on.

We’ve adjusted our code to make all these checks and balances. But we’ve only examined a handful of the Ruby “primitives” that are available. After examining only these few cases, are we sure that loading any other Ruby “primitive” should be considered safe?

I’m not.


People are asking for a safe_load from Psych. But the question is: “what does safe mean?”. Some say “only prevent foreign code from being executed”, but does that mean we’re safe?

To me it doesn’t. To me, “safe” means something that is:

  1. Easy to understand.
  2. Conservative.
  3. Easy to extend.

I propose that “safe load” mean loading only Null, Booleans, Numerics, Strings, Arrays, and Hashes, with no self-referential data structures. This is easy to understand. You only need to know about 6 data types, not a laundry list of possible classes.

I’d prefer to stay conservative, not playing whack-a-mole when someone figures out how to exploit another class. Keeping the number of supported data types low prevents that game of whack-a-mole.

If you really need to load other types, just add the class to the whitelist when calling safe_load. It should really be “that easy”. You explicitly know the types that will be loaded, so the possible values returned only grow when you say so.
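Psych did eventually grow a safe_load along these lines. A sketch of how the whitelist works in today's Psych (the permitted_classes keyword needs Psych 3.1+; older versions took positional arguments):

```ruby
require 'psych'
require 'date'

yaml = <<-YAML
---
when: 2013-02-03
count: 12
YAML

# Without the whitelist entry this raises Psych::DisallowedClass,
# because a YAML date scalar deserializes to a Ruby Date object
data = Psych.safe_load(yaml, permitted_classes: [Date])

data['count']      # => 12
data['when'].class # => Date
```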

YAML Postmortem

This section isn’t actually a postmortem (YAML isn’t dead), it’s actually just a postscript section, so you can stop reading now. I just wanted to call it “postmortem” because I think it’s funny when people have postmortems about software.

People seem to be giving YAML a bad rap over these exploits. The truth is that all of these exploits exist in any scheme that allows you to create arbitrary objects in a target system. This is why nobody uses Marshal to send objects around. Think of YAML as a human readable Marshal.

Should we stop using YAML? No. But we probably shouldn’t use it to read foreign data. Can we make Psych safe? As I said earlier, it depends on what you think “safe” means. My opinion of “safe” puts YAML on the same field as JSON as far as “objects that can be transferred” is concerned.

Anyway, I think it’s important to see we have three things going on in these exploits. We have YAML the language, which defines schemes for arbitrary object serialization, Psych which honors those requests, and user land code which is subject to the exploits. YAML the language doesn’t say any of this code should be executed, and in fact Psych won’t eval random input. The problem is that certain YAML documents can be fed to Psych to create objects that interact with user code in unexpected ways.

The user land code is what gets exploited; YAML and Psych are merely a vehicle. But asking users to remove all cases of module_eval or method_missing + send and to require boundary checks, etc. is completely unreasonable.

This is why we need a YAML.safe_load.

read more »

Rails 4 and your Gemfile

TL;DR This is your periodic reminder to specify dependency versions in your Gemfile

I started updating one of our larger projects at work to use edge Rails. This project uses devise, and the Gemfile declares the dependency like this:

gem "devise"

The latest version of devise correctly declares its dependency on Railties as ~> 3.1:

Devise depends on Rails 3.1

However, Devise version 1.5.3 does not declare a specific dependency on Rails (or Railties):

Devise version 1.5.3

This means that as I upgrade this application, doing a bundle update pulls in Devise version 1.5.3. This version of Devise is incompatible with the app’s codebase. How do you fix it? Update the Gemfile to include the version number (just like rubygems.org recommends) like this:

gem "devise", "~> 2.1.2"
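The ~> (“pessimistic”) operator means at least 2.1.2, but less than 2.2. You can check this against rubygems’ own Gem::Requirement:

```ruby
req = Gem::Requirement.new("~> 2.1.2")

req.satisfied_by?(Gem::Version.new("2.1.2")) # => true
req.satisfied_by?(Gem::Version.new("2.1.9")) # => true
req.satisfied_by?(Gem::Version.new("2.2.0")) # => false
```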

Bundling against Rails 4.0 will fail, but at least it will fail at bundle time rather than at runtime:

Bundler could not find compatible versions for gem "railties":
  In Gemfile:
    devise (~> 2.1.2) ruby depends on
      railties (~> 3.1) ruby

    rails (>= 0) ruby depends on
      railties (4.0.0.beta)

Time to find what other version issues await!


read more »

Protected Methods and Ruby 2.0

TL;DR: respond_to? will return false for protected methods in Ruby 2.0

Let’s check out how protected and private methods behave in Ruby. After that, we’ll look at how Ruby 2.0 changes could possibly break your code (and what to do about it).

Method Visibility

In Ruby, we have three visibilities: public, protected, and private. Let’s define a class with all three:

class Heart
  def public_method; end

  protected

  def protected_method; end

  private

  def private_method; end
end

First, let’s see how these differ from within the Heart class.

Internal Visibility

Inside the Heart class, we can call any of these methods with an implicit recipient. In other words, this method will not raise exceptions (note that I’m just reopening the Heart class for demonstration):

class Heart
  def ok!
    public_method
    protected_method
    private_method
  end
end

Public and protected methods can be called with an explicit recipient, but private methods cannot. So the following code will raise an exception on the third line of the method body:

class Heart
  def not_ok!
    self.public_method    # OK
    self.protected_method # OK
    self.private_method   # raises NoMethodError
  end
end

External Visibility

Outside the Heart class, we can only call the public methods:

irb(main):032:0> heart = Heart.new
=> #<Heart:0x007fdad1952f78>
irb(main):033:0> heart.public_method    # => nil
irb(main):034:0> heart.protected_method # => raises NoMethodError
irb(main):035:0> heart.private_method   # => raises NoMethodError

One notable exception: if the object sending the message is an instance of the same class as the object receiving the message, it may call protected methods.

Here is an example:

class Hands < Heart
  def call_stuff r
    r.public_method    # => ok!
    r.protected_method # => ok, but only if self.is_a?(r.class)
    r.private_method   # => raises NoMethodError
  end
end

I find this behavior to be most useful when implementing equality operators. For example:

class A
  def == other
    if self.class == other.class
      internal == other.internal
    end
  end

  protected

  def internal; :a; end
end


Finally, let’s look at respond_to?. The behavior of this method is changing in Ruby 2.0.0. First we’ll look at the behavior in 1.9, then how it changes in Ruby 2.0.0.

The respond_to? method will return true if the object responds to the given method. Let’s call respond_to? on our Heart object (with Ruby 1.9) and see what it returns:

1.9.3-p194 :010 > heart = Heart.new
 => #<Heart:0x007faaaa14e450> 
1.9.3-p194 :011 > heart.respond_to? :public_method    # => true 
1.9.3-p194 :012 > heart.respond_to? :protected_method # => true 
1.9.3-p194 :013 > heart.respond_to? :private_method   # => false 

Ruby 1.9 will return true for public and protected methods, but false for private methods. If we compare this to actually calling the method, we’ll see an inconsistent behavior. Let’s interleave respond_to? checks along with calling the method to see what happens:

1.9.3-p194 :014 > heart = Heart.new
 => #<Heart:0x007faaaa16d080> 
1.9.3-p194 :015 > heart.respond_to? :public_method    # => true 
1.9.3-p194 :016 > heart.public_method                 # => nil
1.9.3-p194 :017 > heart.respond_to? :protected_method # => true
1.9.3-p194 :018 > heart.protected_method              # => NoMethodError
1.9.3-p194 :019 > heart.respond_to? :private_method   # => false
1.9.3-p194 :020 > heart.private_method                # => NoMethodError

So, despite the fact that respond_to? returns true for the protected method, we cannot actually call that method.

Introspection (in Ruby 2.0.0)

In Ruby 2.0.0, respond_to? has changed. It no longer returns true for protected methods. Let’s look at our Heart example again, but this time with Ruby 2.0.0:

irb(main):013:0> heart = Heart.new
=> #<Heart:0x007fce0b09a188>
irb(main):014:0> heart.respond_to? :public_method    # => true
irb(main):015:0> heart.public_method                 # => nil
irb(main):016:0> heart.respond_to? :protected_method # => false
irb(main):017:0> heart.protected_method              # => NoMethodError
irb(main):018:0> heart.respond_to? :private_method   # => false
irb(main):019:0> heart.private_method                # => NoMethodError

The behavior of respond_to? lines up with the reality of calling the method in Ruby 2.0.0.

Caveats on Reality

The changes to respond_to? also apply inside our “same instances” case. Let’s use this class as an example:

class A
  def == a
    puts a.respond_to? :zoom!
    puts a.zoom!
  end

  protected

  def zoom!; :a; end
end

If we run the following code in Ruby 2.0.0, the call to respond_to? will return false despite the fact that we can actually call the method:

irb(main):029:0> A.new == A.new
false
a
=> nil

I’m not sure this is a big problem because we should be checking ancestors in the comparator methods. If we check that the ancestors are the same, then the respond_to? calls become unnecessary. Also 99% of the objects I write don’t implement object comparator methods.


Most of the problems I’ve found in the Rails code base relating to respond_to? were fixed by either changing the visibility of the method, or calling respond_to? with a true as the second argument. In 1.9, the true tells Ruby to search private methods, and in 2.0, private and protected methods.
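As a concrete check with the Heart class (reproduced here), the second argument flips the behavior on Ruby 2.0:

```ruby
class Heart
  def public_method; end

  protected

  def protected_method; end
end

heart = Heart.new

heart.respond_to?(:public_method)          # => true
heart.respond_to?(:protected_method)       # => false on 2.0 (true on 1.9)
heart.respond_to?(:protected_method, true) # => true: search non-public methods too
```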

For library authors, dealing with this change depends on the situation. For example, if you have code like this:

def some_method other
  if other.respond_to?(:foo)
    other.foo
  else
    some_default_behavior
  end
end
Consider forcing the other object to have the method foo, and the super class of the foo instance implementing some_default_behavior.

If you expect foo to be a protected method, consider changing to is_a? checks, or passing true to respond_to?. Passing true could result in false positives, but I haven’t personally encountered that as a problem (yet).

Happy Hacking! <3<3<3<3

read more »

Is it live?

TL;DR Rails 4.0 will allow you to stream arbitrary data at arbitrary intervals with Live Streaming.


Besides enabling multi-threading by default, one of the things I really wanted for Rails 4.0 is the ability to stream data to the client. I want the ability to treat the response object as an I/O object, and have the data I write immediately available to the client. Essentially, the ability to deliver whatever data I want, whenever I want.

Last night I merged a patch to master that allows us to do exactly that: send arbitrary data in real time to the client. In this article, I would like to show off the feature by developing a small application that automatically refreshes the page when a file is written. I’ll be working against edge Rails, specifically against this commit (hopefully then people in the future will notice if / when this article is out of date!)

Here is a video of the final product we’ll build in this article:

Response Streaming

The first thing I added was a “stream” object to the response. This object is where data will be buffered until it is sent to the client. The stream object is meant to quack like an IO object. For example:

class MyController < ActionController::Base
  def index
    100.times {
      response.stream.write "hello world\n"
    }
    response.stream.close
  end
end
In order to maintain backwards compatibility, the above code will work, but it will not stream data to the client. It will buffer the data until the response is completed, then send everything at the same time.

Live Streaming

To make live streaming actually work, I added a module called ActionController::Live. If you mix this module in to your controller, all actions in that controller can stream data to the client in real time. We can make the above MyController example live stream by mixing in the module like so:

class MyController < ActionController::Base
  include ActionController::Live

  def index
    100.times {
      response.stream.write "hello world\n"
    }
    response.stream.close
  end
end

The code in our action stays exactly the same, but this time the data will be streamed to the client every time we call the write method.


Before we start on our little example project, we should talk a bit about web servers. By default, script/rails server uses WEBrick. The Rack adapter for WEBrick buffers all output in a way we cannot bypass, so developing this example with script/rails server will not work.

We could use Unicorn, but it is meant for fast responses. Unicorn will kill our connection after 30 seconds. The protocol we’re going to use actually makes this behavior irrelevant, but it’s a bit annoying to see the logs.

For this project, I think the best webserver would be either Rainbows!, Puma, or Thin. I’ve been playing with Puma a lot lately, so I’ll use it in this example.

Our application

We’re going to build an application that automatically reloads the page whenever a file is saved. You can find the final repository here.


For this project we’re going to use edge Rails and Live Streaming, along with a bit of JavaScript and Server-Sent Events. To detect file system changes, we’re going to use the rb-fsevent gem. I think this gem only works on OS X, but it should be easy to translate this project to Linux or Windows given the right library.

Server-Sent Events

If you’ve never heard of Server-Sent Events (from here on I’ll call them SSEs), it’s a feature of HTML5 that allows long polling, but is built in to the browser. Basically, the browser keeps a connection open to the server, and fires an event in JavaScript every time the server sends data. An example event looks like this:

id: 12345\n
event: some_channel\n
data: {"hello":"world"}\n\n

Messages are delimited by two newlines. The data field is the event’s payload. In this example, I’ve just embedded some JSON data in the payload. The event field is the name of the event to fire in JavaScript. The id field should be a unique id of the message. SSE does automatic reconnection; if the connection is lost, the browser will automatically try to reconnect. If an id has been provided with your messages, when the browser attempts to reconnect, it will send a header (Last-Event-ID) to the server allowing you to pick up where you left off.
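As a quick sanity check of the format (the helper name sse_message is made up for illustration):

```ruby
require 'json'

# Build a single SSE frame; id and event are optional fields
def sse_message data, id: nil, event: nil
  msg = ""
  msg << "id: #{id}\n" if id
  msg << "event: #{event}\n" if event
  msg << "data: #{JSON.dump(data)}\n\n"
end

sse_message({ :hello => "world" }, id: 12345, event: "some_channel")
# => "id: 12345\nevent: some_channel\ndata: {\"hello\":\"world\"}\n\n"
```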

We’re going to build a controller that emits SSEs and tells the browser to refresh the page.

Getting Started

The first thing we’ll do is generate a new Rails project from the Rails git repository (I keep all my git repos in ~/git):

$ cd ~/git/rails
$ ruby railties/bin/rails new ~/git/reloader --dev
$ cd ~/git/reloader

Update the Gemfile to include puma and rb-fsevent and re-bundle:

diff --git a/Gemfile b/Gemfile
index 9e075a8..51ce01c 100644
--- a/Gemfile
+++ b/Gemfile
@@ -6,6 +6,8 @@ gem 'arel',      github: 'rails/arel'
 gem 'active_record_deprecated_finders', github: 'rails/active_record_deprecated_finders'
 gem 'sqlite3'
+gem 'puma'
+gem 'rb-fsevent'
 # Gems used only for assets and not required
 # in production environments by default.

Then we’ll generate a controller for emitting SSE messages to the browser:

$ ruby script/rails g controller browser

Moving on!

Generating SSEs

I’d like an object that knows how to format messages as SSE and emits those messages to the live stream. To do this, we’ll write a small class that decorates the output stream and knows how to dump objects as SSEs:

require 'json'

module Reloader
  class SSE
    def initialize io
      @io = io
    end

    def write object, options = {}
      options.each do |k,v|
        @io.write "#{k}: #{v}\n"
      end
      @io.write "data: #{JSON.dump(object)}\n\n"
    end

    def close
      @io.close
    end
  end
end

We’ll place this file under lib/reloader/sse.rb and require it from the browser controller. In the controller, we’ll mix in ActionController::Live and try emitting some SSEs:

require 'reloader/sse'

class BrowserController < ApplicationController
  include ActionController::Live

  def index
    # SSE expects the `text/event-stream` content type
    response.headers['Content-Type'] = 'text/event-stream'

    sse = Reloader::SSE.new(response.stream)

    begin
      loop do
        sse.write({ :time => Time.now })
        sleep 1
      end
    rescue IOError
      # When the client disconnects, we'll get an IOError on write
    ensure
      sse.close
    end
  end
end

Next, update your routes.rb to point at the new controller:

Reloader::Application.routes.draw do
  get 'browser' => 'browser#index'
end

Fire up Puma in one shell:

$ puma
Puma 1.5.0 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://
Use Ctrl-C to stop

Then in another shell curl against the endpoint. You should see an event emitted every second. Here is my output after a few seconds:

$ curl -i http://localhost:9292/browser
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
X-UA-Compatible: IE=Edge
X-Request-Id: 76cfaa39-d23b-4eac-8337-f915410dc0de
X-Runtime: 0.430762
Transfer-Encoding: chunked

data: {"time":"2012-07-30T10:02:05-07:00"}

data: {"time":"2012-07-30T10:02:06-07:00"}

data: {"time":"2012-07-30T10:02:07-07:00"}

data: {"time":"2012-07-30T10:02:08-07:00"}

data: {"time":"2012-07-30T10:02:09-07:00"}

data: {"time":"2012-07-30T10:02:10-07:00"}


Next we should monitor the file system.

File System Monitoring

Now we’ll update the controller to emit an event every time a file under app/assets or app/views changes. Rather than a loop in our controller, we’ll use the FSEvent object:

require 'reloader/sse'

class BrowserController < ApplicationController
  include ActionController::Live

  def index
    # SSE expects the `text/event-stream` content type
    response.headers['Content-Type'] = 'text/event-stream'

    sse = Reloader::SSE.new(response.stream)

    begin
      directories = [
        File.join(Rails.root, 'app', 'assets'),
        File.join(Rails.root, 'app', 'views'),
      ]

      fsevent = FSEvent.new

      # Watch the above directories
      fsevent.watch(directories) do |dirs|
        # Send a message on the "refresh" channel on every update
        sse.write({ :dirs => dirs }, :event => 'refresh')
      end

      fsevent.run
    rescue IOError
      # When the client disconnects, we'll get an IOError on write
    ensure
      sse.close
    end
  end
end

The controller will send an SSE named “refresh” every time a file is modified. If you start Puma in one shell, curl in a second shell, and touch a file in a third, you will see an event.

Curl started in one shell:

$ curl -i http://localhost:9292/browser

Touch a file in another:

$ touch app/assets/javascripts/application.js

Now the curl shell should look like this:

$ curl -i http://localhost:9292/browser
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
X-UA-Compatible: IE=Edge
X-Request-Id: 98331d36-ef7c-4d15-ad99-331149fc589b
X-Runtime: 43.307765
Transfer-Encoding: chunked

event: refresh
data: {"dirs":["/Users/aaron/git/reloader/app/assets/javascripts/"]}

Every time a file is modified under the directories we’re watching, an SSE will be sent up to the browser.

Listening with JavaScript

Next let’s add the JavaScript that will actually refresh the page. I’m going to add this directly to app/assets/javascripts/application.js. The JavaScript we’ll add simply opens an SSE connection and listens for refresh events.

jQuery(document).ready(function() {
  setTimeout(function() {
    var source = new EventSource('/browser');
    source.addEventListener('refresh', function(e) {
      window.location.reload();
    });
  }, 1);
});
Whenever a refresh event happens, the browser will reload the current page.

Parallel Requests

We need to update the configuration in development to handle multiple requests at the same time. One request for the page we’re working on, and another request for the SSE controller. Add these lines to your config/environments/development.rb but please note that they may change in the future:

  config.preload_frameworks = true
  config.allow_concurrency = true

Next we’ll see everything work together.

Trying it out!

To see the automatic refreshes in action, let’s create a test controller and view. I just want to see the automatic refreshes happen, so I’ll use the scaffolding to generate a model, view, and controller:

$ ruby script/rails g scaffold user name:string
$ rake db:migrate

Now run Puma, and navigate to http://localhost:9292/users. If you watch the developer tools, you’ll see the browser connect to /browser but the request will never finish. That is what we want: the browser listening for events on that endpoint.

If you change any file under app/assets or app/views, a message should be sent to the browser, and the browser will refresh the page.


SSE Caveats / Features

SSEs will not work on IE (yet). If you want to use this with IE, you’ll have to find another way. SSEs will work on pretty much every other browser, including Mobile Safari.

Some webservers (notably Unicorn) cut off the request after a particular timeout. Be mindful of this when designing your application, and remember that SSE will automatically reconnect after a connection is lost.

Heroku will cut off your connections after 30 seconds. I had trouble getting the SSE to reconnect to a Heroku server, but I haven’t had time to investigate the issue.

Rails Live Streaming Caveats

Mixing the Live Streaming module in to your controller will enable every action in that controller to have a Live Streaming object. In order to make this feature work, I had to work around the Rack API and I did this by executing Live Stream actions in a new thread. This shouldn’t be a problem for most people, but I thought it would be good for people to know.

Headers cannot be written after the first call to write or close on the stream. You will get an exception if you attempt to write to the headers after those calls are made. This is because when you call write on the stream, the server will send all the headers up to the client, so writing new headers after that point is useless and probably a bug in your code.

Always make sure to close your Live Stream IO objects. If you don’t, it might mean that a socket will sit open forever.


I thought streaming was already introduced in Rails 3.2. How is this different?

Yes, streaming templates were added to Rails 3.2. The main difference between Live Streaming and Streaming Templates is that with Streaming Templates, the application developer could not choose what data was sent to the client or when. Live Streaming gives the application developer full control over what data is sent to the client and when.

Final Thoughts

I’m very excited about this feature of Rails 4. In my opinion, it is one of the most important new features. I’ve been interested in streaming data from Rails for a long time. We can use this feature to reduce latency and deliver data more quickly to clients on slow connections (e.g. cell phones), for infinite streams like chatrooms, or for cool productivity hacks like this article shows.

I hope you enjoyed this article! I think for the next demo of Live Streams, I would like to show how to reduce latency when sending JSON streams to the client. That might be fun.


read more »