2009-06-12 @ 17:24

Easy Markup Validation

I wanted a test helper that would assert that my XHTML was actually valid. So I wrote one and called it “markup_validity”. You can use it too, and I will show you how.

First, install the gem:

  $ sudo gem install markup_validity

Then, use it in your tests:

require 'test/unit'
require 'rubygems'
require 'markup_validity'

class ValidHTML < Test::Unit::TestCase
  def test_i_can_has_valid_xhtml
    assert_xhtml_transitional xhtml_document
  end
end

Oh. You use RSpec? It supports that too:

require 'rubygems'
require 'markup_validity'

describe "my XHTML document" do
  it "can has transitional xhtml" do
    xhtml_document.should be_xhtml_transitional
  end
end

Debugging invalid markup can be a pain. MarkupValidity tries to give you helpful errors to make your life easier. Say you have an invalid piece of XHTML like this:

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
  </head>
  <body>
    <p>
      <p>
        Hello
      </p>
    </p>
  </body>
</html>

The error output from MarkupValidity will be this:

Error on line: 2:
Element 'head': Missing child element(s). Expected is one of ( script, style, meta, link, object, isindex, title, base ).

1: <html xmlns="http://www.w3.org/1999/xhtml">
2:   <head>
3:   </head>
4:   <body>
5:     <p>

Error on line: 6:
Element 'p': This element is not expected. Expected is one of ( a, br, span, bdo, object, applet, img, map, iframe, tt ).

5:     <p>
6:       <p>
7:         Hello
8:       </p>
9:     </p>

MarkupValidity provides a few assertions for test/unit:

  • assert_xhtml_transitional(xhtml) for asserting valid transitional XHTML
  • assert_xhtml_strict(xhtml) for asserting valid strict XHTML
  • assert_schema(schema, xml) for asserting that your xml validates against a schema (see the sketch after this list)
  • assert_xhtml which is an alias for assert_xhtml_transitional
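
The assert_schema assertion did not get an example above, so here is a rough sketch. I am assuming the schema argument is the XSD source itself and that both arguments are plain strings; the schema and document below are made up just for illustration:

require 'test/unit'
require 'rubygems'
require 'markup_validity'

class ValidAgainstSchema < Test::Unit::TestCase
  def test_xml_matches_my_schema
    # A tiny made-up schema: a single <greeting> element containing text.
    schema = <<-eos
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="greeting" type="xs:string"/>
</xs:schema>
    eos

    xml = '<greeting>Hello world</greeting>'

    assert_schema schema, xml
  end
end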

The methods provided for RSpec are quite similar:

  • be_xhtml_transitional for asserting valid transitional XHTML
  • be_xhtml_strict for asserting valid strict XHTML
  • be_valid_with_schema(schema) for asserting that your xml validates against a schema (example after this list)
  • be_xhtml which is an alias for be_xhtml_transitional
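
And the RSpec equivalent, again just a sketch; xml_document and my_schema here are placeholders for whatever returns your XML and schema strings:

require 'rubygems'
require 'markup_validity'

describe "my XML document" do
  it "validates against my schema" do
    xml_document.should be_valid_with_schema(my_schema)
  end
end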

MarkupValidity even works well with Rails. Here is an example Rails controller test:

require 'test_helper'
require 'markup_validity'

class AwesomeControllerTest < ActionController::TestCase
  test "valid markup" do
    get :new
    assert_xhtml_transitional @response.body
  end
end

2009-06-26 @ 08:48

String Encoding in Ruby 1.9 C extensions

One of the challenges of developing Nokogiri has been dealing with String encodings in C. I would like to present one of the problems encountered, along with a solution. I will be using RubyInline in the examples below, but the C code presented should be easy to port to your own C extensions.

Examining the Encoding

If you’ve developed a C extension before, you’re probably familiar with rb_str_new2 and friends. They all basically turn a char * into a String VALUE. But in Ruby 1.9, what is the encoding of the returned Ruby String? Well, using RubyInline, it’s easy enough to see by calling the “encoding” method. Here is a script that works in Ruby 1.8 and Ruby 1.9:

require 'rubygems'
require 'inline'

class HelloWorld
  inline do |builder|
    builder.c '
      static VALUE test() {
        return rb_str_new2("Hello world");
      }
    '
  end
end

string = HelloWorld.new.test

if string.respond_to? :encoding
  puts string.encoding
else
  puts string
end

In Ruby 1.8, this outputs the string itself; in 1.9, we see the encoding, which is ASCII-8BIT. Now ASCII-8BIT may be the encoding that you want, but then again, it may not. In Nokogiri, the strings coming from libxml2 are already encoded according to the document declaration. So the strings returned must be marked with the appropriate encoding. How can we update the encoding?

Changing the Encoding

In Ruby 1.9, we get a few new functions specifically for dealing with encoding. These functions are defined in <ruby/encoding.h>. We’re going to be dealing with two of them: rb_enc_find_index and rb_enc_associate_index.

The first function, rb_enc_find_index, looks up the index of an encoding given its name as a char *. It takes a string like “UTF-8” and returns a magic index number for that encoding.

The second function, rb_enc_associate_index, will associate a string held in a VALUE with the encoding index returned from the first function.

Armed with this knowledge, we can modify our original program to return a string encoded with UTF-8. The only modifications are to include <ruby/encoding.h>, get the index for the desired encoding, then associate the VALUE with the returned index:

require 'rubygems'
require 'inline'

class HelloWorld
  inline do |builder|
    builder.include "<ruby/encoding.h>"

    builder.c '
      static VALUE test() {
        VALUE string = rb_str_new2("Hello World");
        int enc = rb_enc_find_index("UTF-8");
        rb_enc_associate_index(string, enc);
        return string;
      }
    '
  end
end

string = HelloWorld.new.test

if string.respond_to? :encoding
  puts string.encoding
else
  puts string
end

Great! When this is run under Ruby 1.9, the encoding returned is UTF-8. Unfortunately, this example is now specific for Ruby 1.9. Ruby 1.8 does not ship with the correct header files, and definitely does not include the functions for looking up and assigning encoding. This code will just not work under Ruby 1.8. Luckily, this code can be refactored to work under either version of Ruby.

Refactoring for 1.8 Support

Both Ruby 1.8 and 1.9 provide a <ruby.h> header file. The Ruby 1.9 version of that file defines a constant HAVE_RUBY_ENCODING_H that lets us determine whether the proper header file exists. Our final attempt tests for the encoding constant, then defines a macro to wrap rb_str_new2. If the version of Ruby we compile against has encoding support, the macro can add the encoding to the string; otherwise, it just ignores the encoding:

require 'rubygems'
require 'inline'

class HelloWorld
  inline do |builder|

    builder.prefix <<-eoc
#include <ruby.h>

#ifdef HAVE_RUBY_ENCODING_H

#include <ruby/encoding.h>

#define ENCODED_STR_NEW2(str, encoding) \
  ({ \
    VALUE _string = rb_str_new2((const char *)str); \
    int _enc = rb_enc_find_index(encoding); \
    rb_enc_associate_index(_string, _enc); \
    _string; \
  })

#else

#define ENCODED_STR_NEW2(str, encoding) \
  rb_str_new2((const char *)str)

#endif
    eoc

    builder.c '
      static VALUE test() {
        return ENCODED_STR_NEW2("Hello world", "UTF-8");
      }
    '
  end
end

string = HelloWorld.new.test

if string.respond_to? :encoding
  puts string.encoding
else
  puts string
end

In 1.8, the macro just returns the new string. In 1.9, the macro returns the string and additionally sets the encoding. Now if we use this macro wherever we create new strings, we’ll be working well with 1.8 and 1.9!

Final Notes

This example was slightly simplified. Since the encoding index is determined at runtime, there could be problems: if rb_enc_find_index cannot find the requested encoding, it simply returns -1. The macro should handle that case.
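
One way to handle it, as a sketch: check the index before associating it and raise an ArgumentError when the lookup fails. Only the HAVE_RUBY_ENCODING_H branch of the macro needs to change:

#define ENCODED_STR_NEW2(str, encoding) \
  ({ \
    VALUE _string = rb_str_new2((const char *)str); \
    int _enc = rb_enc_find_index(encoding); \
    /* rb_enc_find_index returns -1 when the encoding is unknown */ \
    if (_enc < 0) \
      rb_raise(rb_eArgError, "unknown encoding: %s", encoding); \
    rb_enc_associate_index(_string, _enc); \
    _string; \
  })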

Also, if you’re playing along at home, remember to save the file between running it with 1.8 and 1.9. RubyInline examines the mtime of the Ruby file, and will only recompile when the .rb file has been written to. That means if you run it with 1.8, then immediately run again with 1.9, it won’t recompile it for 1.9. I suppose I should send in a patch. ;-)

One last thing… There may be better ways to do this. I needed to determine the encoding at runtime because XML files declare their encoding scheme. If you parse an XML file that declares its encoding as EUC-JP, it would make sense that the strings you pull out are encoded in EUC-JP, right? If you know that you’re always going to be returning UTF-8 strings from your C extensions, it could be a different story. Either way, using macros and checking for constants should make sure your code works with 1.8 or 1.9.
