2012-07-09 @ 16:51

Crouching Tiger, Hidden Salami

I’ve been learning how to cure meat, and I thought I should share my setup. I’m currently on my third batch of meat (my second time curing salami). Curing meat requires controlling the environment both outside and inside the meat. This post is about the hardware I use for controlling the environment outside the meat. Specifically:

  • Temperature
  • Humidity
  • Wind speed

Temperature

I’m using a wine fridge, the Vinotemp VT-27 TS:

IMG_0008

I chose this fridge because it has a temperature range of 39°F to 65°F, which is an ideal range for meat curing.

Most of the curing time is spent around 60°F, but during the fermentation stage you must keep the temperature at 70°F. To maintain the higher temperature, I used (until recently) a Freezer Temperature Controller. I was able to figure out the wiring for the temperature relay coil that controls the fridge pump, so I’m currently using a TI Launchpad and a humidity and temperature sensor to control the fridge directly:

IMG_0046

Controlling the fridge directly is not necessary; the freezer temperature controller works great. I just wanted to control the fridge with a microcontroller because I am a huge nerd!
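The actual control code runs on the Launchpad, but the logic is just a bang-bang thermostat with a little hysteresis so the compressor doesn’t short-cycle. Here’s a rough Ruby sketch of the idea, not the real firmware; the sensor and relay objects are hypothetical stand-ins for the real hardware, and the two-degree hysteresis is an arbitrary choice:

# Rough sketch of the thermostat logic (not the actual Launchpad firmware).
# `sensor` must respond to #read_temperature_f and `relay` to #on / #off;
# both are hypothetical stand-ins for the real hardware.
def hold_temperature(sensor, relay, options = {})
  target_f   = options.fetch(:target_f, 60.0)
  hysteresis = options.fetch(:hysteresis, 2.0)

  loop do
    temp = sensor.read_temperature_f

    if temp > target_f + hysteresis
      relay.on    # run the compressor to cool the box back down
    elsif temp < target_f - hysteresis
      relay.off   # stop cooling and let the box drift up
    end

    sleep 30      # check again in 30 seconds
  end
end

During fermentation you would just call it with :target_f => 70.0 instead.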

Humidity

For the humidity, I am using a Crane Drop humidifier. I chose this humidifier because it’s ultrasonic, so supposedly the water droplets it puts into the air are smaller (or so I’m told). More importantly, the humid air it produces is not heated, so it will not affect the temperature of your box as much.

To get the humid air into the cooler, I drilled a hole in the back of the fridge.

I bought some tubing and hooked the humidifier into the hole. This is a top view, showing the tubing hooked into the fridge:

IMG_0027

I didn’t want to constantly monitor and adjust the humidifier, so I also bought a Dayton Humidifier Controller. The controller sits inside the fridge. I have an extension cord running into the fridge, the sensor plugged into that, and the humidifier plugged into the sensor.

Wind speed

The final piece is wind speed. On my first batch, I didn’t understand the importance of wind speed until my sausage got slimy and gross. Make sure you have fans in your curing box! I have two fans plugged into the extension cord that is used for the humidity sensor:

IMG_0035

Miscellaneous

The wire racks I have in the box are not stainless, so it’s important not to let the salami touch the metal (otherwise you get a metallic flavor). Instead, I use a bunch of clothespins to hold on to the string that ties off my salami.

So far, I’ve found the best place to put the fans is above the meat, pointing down; I couldn’t get good air circulation anywhere else.

This fridge was not meant to operate with such high humidity! When water condenses inside the fridge, it normally drips into a tray on the outside, where it evaporates. But since this box operates at 70% to 90% humidity, there is far too much condensate for the drip tray. I had to remove the regular drip tray and run a pipe down to a container to catch the water.

Recap

My hardware list is:

  • Vinotemp VT-27 TS wine fridge
  • Freezer Temperature Controller
  • Crane Drop ultrasonic humidifier
  • Dayton Humidifier Controller
  • Two fans

The fridge cost me about $200. The humidifier and controllers ran about $150 together. I estimate the total cost was about $400 when I was done. The freezer controller can hold a regular fridge in the correct temperature range for curing meat, so one way to save money on this project would be to buy a used fridge from craigslist or something.

Another possibility is to buy a temperature and humidity sensor. It requires a bit more assembly, but it’s cheaper than buying both the freezer controller and the humidity controller.

With a microcontroller, there are even more possible solutions. It just depends on how much you’re willing to spend, and how much you want to assemble yourself.

Here is my full setup:

IMG_0030

I hope you’ve enjoyed this post! I think my next post will be “Hacking your sausage box” (how to hack the Vinotemp wine cooler), followed by “Real Time Sausage Monitoring” (publishing and monitoring sausage info in real time).

Cat:

IMG_0026

<3<3<3<3<3


2012-07-30 @ 11:00

Is it live?

TL;DR Rails 4.0 will allow you to stream arbitrary data at arbitrary intervals with Live Streaming.

Besides enabling multi-threading by default, one of the things I really wanted for Rails 4.0 is the ability to stream data to the client. I want to be able to treat the response object as an I/O object, and have the data I write be immediately available to the client. Essentially, the ability to deliver whatever data I want, whenever I want.

Last night I merged a patch to master that allows us to do exactly that: send arbitrary data in real time to the client. In this article, I would like to show off the feature by developing a small application that automatically refreshes the page when a file is written. I’ll be working against edge Rails, specifically against this commit (hopefully that way people in the future will notice if / when this article is out of date!)

Here is a video of the final product we’ll build in this article.

Response Streaming

The first thing I added was a “stream” object to the response. This object is where data will be buffered until it is sent to the client. The stream object is meant to quack like an IO object. For example:

class MyController < ActionController::Base
  def index
    100.times {
      response.stream.write "hello world\n"
    }
    response.stream.close
  end
end

In order to maintain backwards compatibility, the above code will work, but it will not stream data to the client. It will buffer the data until the response is completed, then send everything at the same time.

Live Streaming

To make live streaming actually work, I added a module called ActionController::Live. If you mix this module into your controller, all actions in that controller can stream data to the client in real time. We can make the above MyController example live stream by mixing in the module like so:

class MyController < ActionController::Base
  include ActionController::Live

  def index
    100.times {
      response.stream.write "hello world\n"
    }
    response.stream.close
  end
end

The code in our action stays exactly the same, but this time the data will be streamed to the client every time we call the write method.

Webservers

Before we start on our little example project, we should talk a bit about web servers. By default, script/rails server uses WEBrick. The Rack adapter for WEBrick buffers all output in a way we cannot bypass, so developing this example with script/rails server will not work.

We could use Unicorn, but it is meant for fast responses and will kill our connection after 30 seconds. The protocol we’re going to use actually makes this behavior irrelevant, but it’s a bit annoying to see the dropped connections in the logs.

For this project, I think the best webserver would be either Rainbows or Puma. I’ve been playing with Puma a lot lately, so I’ll use it in this example.

Our application

We’re going to build an application that automatically reloads the page whenever a file is saved. You can find the final repository here.

Technology

For this project we’re going to use edge Rails and Live Streaming, along with a bit of JavaScript and Server-Sent Events. To detect file system changes, we’re going to use the rb-fsevent gem. I think this gem only works on OS X, but it should be easy to translate this project to Linux or Windows given the right library.

Server-Sent Events

If you’ve never heard of Server-Sent Events (from here on I’ll call them SSEs), they’re an HTML5 feature that gives you something like long polling, but built into the browser. Basically, the browser keeps a connection open to the server, and fires an event in JavaScript every time the server sends data. An example event looks like this:

id: 12345\n
event: some_channel\n
data: {"hello":"world"}\n\n

Messages are delimited by two newlines. The data field is the event’s payload; in this example, I’ve just embedded some JSON data in it. The event field is the name of the event to fire in JavaScript. The id field should be a unique id for the message. SSE does automatic reconnection: if the connection is lost, the browser will automatically try to reconnect. If an id has been provided with your messages, the browser will send a Last-Event-ID header to the server when it reconnects, allowing you to pick up where you left off.
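If you want to take advantage of that reconnection behavior, the server just has to look at the Last-Event-ID request header when the browser comes back. Here’s a minimal sketch of what that could look like inside a live-streamed action; the events_since helper and the event objects are hypothetical:

# Minimal sketch: resume a reconnected client from where it left off.
# `events_since` is a hypothetical helper that returns the events the
# client missed; each event responds to #id, #name, and #json_payload.
last_id = request.headers['Last-Event-ID']

events_since(last_id).each do |event|
  response.stream.write "id: #{event.id}\n"
  response.stream.write "event: #{event.name}\n"
  response.stream.write "data: #{event.json_payload}\n\n"
end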

We’re going to build a controller that emits SSEs and tells the browser to refresh the page.

Getting Started

The first thing we’ll do is generate a new Rails project from the Rails git repository (I keep all my git repos in ~/git):

$ cd ~/git/rails
$ ruby railties/bin/rails new ~/git/reloader --dev
$ cd ~/git/reloader

Update the Gemfile to include puma and rb-fsevent and re-bundle:

diff --git a/Gemfile b/Gemfile
index 9e075a8..51ce01c 100644
--- a/Gemfile
+++ b/Gemfile
@@ -6,6 +6,8 @@ gem 'arel',      github: 'rails/arel'
 gem 'active_record_deprecated_finders', github: 'rails/active_record_deprecated_finders'
 
 gem 'sqlite3'
+gem 'puma'
+gem 'rb-fsevent'
 
 # Gems used only for assets and not required
 # in production environments by default.

Then we’ll generate a controller for emitting SSE messages to the browser:

$ ruby script/rails g controller browser

Moving on!

Generating SSEs

I’d like an object that knows how to format messages as SSEs and emit them to the live stream. To do this, we’ll write a small class that decorates the output stream and knows how to dump objects as SSEs:

require 'json'

module Reloader
  class SSE
    def initialize io
      @io = io
    end

    def write object, options = {}
      options.each do |k,v|
        @io.write "#{k}: #{v}\n"
      end
      @io.write "data: #{JSON.dump(object)}\n\n"
    end

    def close
      @io.close
    end
  end
end
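Just to see the wire format this class produces, you can point it at $stdout outside of Rails. This snippet isn’t part of the app, it’s only here to show what comes out; it assumes the class above is saved as reloader/sse.rb somewhere on the load path:

# Not part of the app; run with something like `ruby -Ilib` so that
# reloader/sse.rb is on the load path.
require 'reloader/sse'

sse = Reloader::SSE.new($stdout)
sse.write({ :time => Time.now }, :event => 'tick')

# Prints something like this (the exact time format depends on whether
# ActiveSupport is loaded):
#
#   event: tick
#   data: {"time":"2012-07-30 11:00:00 -0700"}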

We’ll place this file under lib/reloader/sse.rb and require it from the browser controller. In the controller, we’ll mix in ActionController::Live and try emitting some SSEs:

require 'reloader/sse'

class BrowserController < ApplicationController
  include ActionController::Live

  def index
    # SSE expects the `text/event-stream` content type
    response.headers['Content-Type'] = 'text/event-stream'

    sse = Reloader::SSE.new(response.stream)

    begin
      loop do
        sse.write({ :time => Time.now })
        sleep 1
      end
    rescue IOError
      # When the client disconnects, we'll get an IOError on write
    ensure
      sse.close
    end
  end
end

Next, update your routes.rb to point at the new controller:

Reloader::Application.routes.draw do
  get 'browser' => 'browser#index'
end

Fire up Puma in one shell:

$ puma
Puma 1.5.0 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://0.0.0.0:9292
Use Ctrl-C to stop

Then in another shell curl against the endpoint. You should see an event emitted every second. Here is my output after a few seconds:

$ curl -i http://localhost:9292/browser
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
X-UA-Compatible: IE=Edge
X-Request-Id: 76cfaa39-d23b-4eac-8337-f915410dc0de
X-Runtime: 0.430762
Transfer-Encoding: chunked

data: {"time":"2012-07-30T10:02:05-07:00"}

data: {"time":"2012-07-30T10:02:06-07:00"}

data: {"time":"2012-07-30T10:02:07-07:00"}

data: {"time":"2012-07-30T10:02:08-07:00"}

data: {"time":"2012-07-30T10:02:09-07:00"}

data: {"time":"2012-07-30T10:02:10-07:00"}

^C
$

Next we should monitor the file system.

File System Monitoring

Now we’ll update the controller to emit an event every time a file under app/assets or app/views changes. Rather than a loop in our controller, we’ll use the FSEvent object:

require 'reloader/sse'

class BrowserController < ApplicationController
  include ActionController::Live

  def index
    # SSE expects the `text/event-stream` content type
    response.headers['Content-Type'] = 'text/event-stream'

    sse = Reloader::SSE.new(response.stream)

    begin
      directories = [
        File.join(Rails.root, 'app', 'assets'),
        File.join(Rails.root, 'app', 'views'),
      ]
      fsevent = FSEvent.new

      # Watch the above directories
      fsevent.watch(directories) do |dirs|
        # Send a message on the "refresh" channel on every update
        sse.write({ :dirs => dirs }, :event => 'refresh')
      end
      fsevent.run

    rescue IOError
      # When the client disconnects, we'll get an IOError on write
    ensure
      sse.close
    end
  end
end

The controller will send an SSE named “refresh” every time a file is modified. If you start Puma in one shell, run curl in a second, and touch a file in a third, you will see an event.

Curl started in one shell:

$ curl -i http://localhost:9292/browser

Touch a file in another:

$ touch app/assets/javascripts/application.js

Now the curl shell should look like this:

$ curl -i http://localhost:9292/browser
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
X-UA-Compatible: IE=Edge
X-Request-Id: 98331d36-ef7c-4d15-ad99-331149fc589b
X-Runtime: 43.307765
Transfer-Encoding: chunked

event: refresh
data: {"dirs":["/Users/aaron/git/reloader/app/assets/javascripts/"]}

Every time a file is modified under the directories we’re watching, an SSE will be sent up to the browser.

Listening with JavaScript

Next let’s add the JavaScript that will actually refresh the page. I’m going to add this directly to app/assets/javascripts/application.js. The JavaScript we’ll add simply opens an SSE connection and listens for refresh events.

jQuery(document).ready(function() {
  setTimeout(function() {
    var source = new EventSource('/browser');
    source.addEventListener('refresh', function(e) {
      window.location.reload();
    });
  }, 1);
});

Whenever a refresh event happens, the browser will reload the current page.

Parallel Requests

We need to update the development configuration to handle multiple requests at the same time: one request for the page we’re working on, and another for the SSE controller. Add these lines to your config/environments/development.rb, but please note that they may change in the future:

  config.preload_frameworks = true
  config.allow_concurrency = true

Next we’ll see everything work together.

Trying it out!

To see the automatic refreshes in action, let’s create a test controller and view. I’ll just use the scaffolding to generate a model, views, and a controller:

$ ruby script/rails g scaffold user name:string
$ rake db:migrate

Now run Puma, and navigate to http://localhost:9292/users. If you watch the developer tools, you’ll see the browser connect to /browser but the request will never finish. That is what we want: the browser listening for events on that endpoint.

If you change any file under app/assets or app/views, a message should be sent to the browser, and the browser will refresh the page.

YAY!

SSE Caveats / Features

SSEs will not work on IE (yet). If you want to use this with IE, you’ll have to find another way. SSEs will work on pretty much every other browser, including Mobile Safari.

Some webservers (notably Unicorn) cut off the request after a particular timeout. Be mindful of this when designing your application, and remember that SSE will automatically reconnect after a connection is lost.
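One thing you can control from the server side is how long the browser waits before it reconnects: SSE messages can include a retry field, measured in milliseconds. The little SSE class from earlier will happily pass it through as just another field (the three-second value here is only an example):

# Inside a live-streamed action, after wrapping the stream as before:
sse = Reloader::SSE.new(response.stream)

# Ask the browser to wait roughly three seconds before reconnecting.
sse.write({ :time => Time.now }, :retry => 3000)

# The client receives something like:
#
#   retry: 3000
#   data: {"time":"2012-07-30T11:00:00-07:00"}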

Heroku will cut off your connections after 30 seconds. I had trouble getting the SSE to reconnect to a Heroku server, but I haven’t had time to investigate the issue.

Rails Live Streaming Caveats

Mixing the Live Streaming module into your controller enables every action in that controller to have a Live Streaming object. To make this feature work, I had to work around the Rack API, which I did by executing Live Stream actions in a new thread. This shouldn’t be a problem for most people, but I thought it would be good for people to know.

Headers cannot be written after the first call to write or close on the stream. You will get an exception if you attempt to write to the headers after those calls are made. This is because when you call write on the stream, the server will send all the headers up to the client, so writing new headers after that point is useless and probably a bug in your code.
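In other words, set everything you need on response.headers up front, then start writing. A tiny sketch of the ordering (the controller name is hypothetical):

class EventsController < ApplicationController
  include ActionController::Live

  def index
    # Set any headers first, before the stream is touched.
    response.headers['Content-Type'] = 'text/event-stream'

    response.stream.write "data: hello\n\n"

    # Too late: the headers went out with that first write, so trying to
    # set another header here would raise an exception.
    # response.headers['X-Too-Late'] = 'nope'

    response.stream.close
  end
end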

Always make sure to close your Live Stream IO objects. If you don’t, it might mean that a socket will sit open forever.

WUT?

I thought streaming was already introduced in Rails 3.2. How is this different?

Yes, streaming templates were added to Rails 3.2. The main difference between Live Streaming and Streaming Templates is that with Streaming Templates, the application developer cannot choose what data is sent to the client or when; the framework streams the rendered template on its own schedule. Live Streaming gives the application developer full control over what data is sent to the client and when.
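To make the difference concrete, here’s a rough sketch of the two approaches side by side. The controller names are hypothetical, and the first action assumes a matching index template exists:

class ReportsController < ApplicationController
  # Streaming templates: the framework streams the rendered template in
  # chunks, but it decides what goes over the wire and when.
  def index
    render :stream => true
  end
end

class TickerController < ApplicationController
  include ActionController::Live

  # Live Streaming: we decide exactly what bytes go out, and when.
  def index
    10.times do |i|
      response.stream.write "tick #{i}\n"
      sleep 1
    end
    response.stream.close
  end
end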

Final Thoughts

I’m very excited about this feature of Rails 4. In my opinion, it is one of the most important new features. I’ve been interested in streaming data from Rails for a long time. We can use this feature to reduce latency and deliver data more quickly to clients on slow connections (e.g. cell phones), for infinite streams like chatrooms, or for cool productivity hacks like this article shows.

I hope you enjoyed this article! I think for the next demo of Live Streams, I would like to show how to reduce latency when sending JSON streams to the client. That might be fun.

<3<3<3
