Tuesday, April 26, 2011

Book Review: Pragmatic Thinking and Learning: Refactor Your Wetware

The Pragmatic Programmers have consistently put out good material for years, and whether the books are their own or other authors', the quality is almost always high. I just finished Pragmatic Thinking and Learning, and the numerous other reviews I've read are all very positive. The community liked it, and that's usually a good sign (we'll see what happens when not everyone is happy).

Important Relationships


The first major concept introduced, and repeatedly referenced, is that the relationships between objects are more interesting than the objects themselves. Discrete "things" - whether facts, concepts, or people - exist not in a vacuum but alongside other similar and different "things." Emergent behaviors and ideas spring from the interactions between everything in a specific system, and those specific contexts generate yet more powerful versions of whatever is interacting within them.

A non-programming example is opening a locked door. Why one wants to open the door shapes how one will attempt to open it. Whether there is a baby in a burning house behind a locked front door, or there are documents we want to steal from a hotel room under guard, will determine the method used - an axe or lock picks. The why strongly influences the how, and context matters.

Got Skills?


We then turn to the Dreyfus Model of Skill Acquisition to see how people learn new topics. A novice needs context-free rules, like recipes, because he doesn't know enough to form any big picture of the problem domain. An expert, at the other end, understands the system he works in so well that he operates by intuition and pattern matching. In between, there is a trend from strict rules to practically none: the greater the expertise, the better we understand what is possible and what we should consider, giving us both faster access to our options and a smaller, more easily understood problem.

Impr-you-vement


The method for improving skills is simple but not easy. Only through deliberate practice can we get to the highest levels of understanding, and this involves a lot of well-structured, hard work. We need a system to work on well-defined tasks that are appropriately challenging, continuous feedback from those tasks to keep us working on relevant ideas, and lots of repetition to strengthen the ideas and make them part of our long-term memories.

It also helps to find the relationships between the things we're learning, looking at the big picture to understand the overall meaning rather than getting bogged down in minute details (at least not at the beginning). Analogies to previous knowledge further create relationships and help establish the context of the new ideas. For example, when reading a non-fiction book, we should first scan the table of contents and chapter summaries to get an overview of what we're going to read and establish the main ideas to focus on. While reading we should summarize the concepts and create metaphors for the material, and, when finished, expand our notes with a reread and discuss the ideas with colleagues. These tactics will cement the knowledge in the brain far better than a cursory skimming of the book.

Nothing is Perfect, However


One weakness of the book is its reliance on the left/right-brain dichotomy, an idea that isn't well supported by science. How deep this criticism cuts depends on how much one relies on the literal truth of that assertion. But if we treat it as a metaphor - like the common metaphor of the brain as a computer, another incomplete one - and remember that it only points in the general direction of understanding rather than being literally accurate, the comparisons hold up. Regardless, I'm sure there is value in the takeaways, such as creating analogies between disparate topics, drawing out ideas, talking out loud, and using other mixed media.

So What?


Pragmatic Thinking and Learning is an excellent book, faults and all, and I did take away some concrete plans.

  • I've created a wiki for knowledge dumps and connecting ideas
  • I'm researching mind mapping software (either this one or this one)
  • I'm writing down my ideas in a notebook, along with notes on books I'm reading, which I then transfer to an online medium

There are dozens of other specific tasks to try, and I plan to come back to the book and implement more as needed. It's certainly not necessary to attempt every single one, but if the motivation is there (and who wouldn't want to learn how to learn better?), the results will follow.

Sunday, April 17, 2011

Testing content_tag in Rails 2.3.5 with RSpec

I'm working on a codebase that's still on Rails 2.3.5, and recently I added a group of radio buttons for users to estimate their expertise level when answering a question. I wanted to play with content_tag() more than I have, so here is the view helper:

module AnswersHelper
  # Creates the markup for displaying the expertise choices for an answer.
  def expertise_choices(answer)
    content_tag(:div, :id => 'choices') do
      content_tag(:span, :class => 'clarification') { 'Not at all' } +
      collect_expertise_choices(answer) +
      content_tag(:span, :class => 'clarification') { 'Very Much So' }
    end
  end

  private

  # Creates 5 radio buttons and checks the one matching the answer's
  # expertise value, if it exists.
  def collect_expertise_choices(answer)
    (1..5).collect do |i|
      checked = (i == answer.expertise) ? { :checked => 'checked' } : {}
      radio_button('answer', 'expertise', i, checked)
    end.to_s # on Ruby 1.8, Array#to_s joins the elements into one string
  end
end

Nothing difficult to get through, but some small notes of interest:

content_tag() calls can nest inside other content_tag() calls, and you can concatenate the resulting markup to build everything you need to display properly. Also, don't forget to call to_s() on the collected radio buttons so you return a single string, not an array.
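To see why that to_s() call matters, here is a tiny standalone sketch (the tag strings are simplified stand-ins for the helper's real output). On Ruby 1.8, which Rails 2.3.5 runs on, Array#to_s concatenates the elements into one string; on Ruby 1.9 and later it returns an inspect-style string instead, so join is the version-safe spelling of the same idea:

```ruby
# Hypothetical stand-in for the collect_expertise_choices result:
# an array of HTML fragments, one per radio button.
tags = (1..5).collect { |i| "<input id=\"answer_expertise_#{i}\" type=\"radio\" />" }

# On Ruby 1.8, tags.to_s concatenates the fragments into one string.
# On Ruby 1.9+, Array#to_s returns an inspect-style string instead,
# so join is the portable way to get the same concatenation.
html = tags.join
```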

Here is the partial that calls the helper:

#expertise
  Are you an expert on this topic?
  %br
  = expertise_choices(answer)

Finally, here are the accompanying tests:

require 'spec_helper'
include AnswersHelper

describe AnswersHelper do
  describe "#expertise_choices" do
    it "should display five radio buttons" do
      answer = mock_model(Answer, :expertise => nil)
      results = expertise_choices(answer)
      (1..5).each do |i|
        results.should have_tag("input#answer_expertise_#{i}[type=radio][value='#{i}']")
      end
    end

    it "should have a #choices div" do
      answer = mock_model(Answer, :expertise => nil)
      results = expertise_choices(answer)
      results.should have_tag('div#choices')
    end

    it "should have two .clarification spans" do
      answer = mock_model(Answer, :expertise => nil)
      results = expertise_choices(answer)
      results.should have_tag('span.clarification', :minimum => 2)
    end

    context "when editing" do
      it "should check the existing choice" do
        answer = mock_model(Answer, :expertise => 4)
        results = expertise_choices(answer)
        results.should have_tag("input[type=radio][value='4'][checked='checked']")
      end
    end
  end
end

Again, nothing difficult to understand, but you can see how cool and powerful have_tag() is. Unfortunately, when we upgrade to RSpec 2, we'll need to change these tests to use webrat's have_selector(). But for now, let's just enjoy the time we have together, okay?

Wednesday, April 6, 2011

Composition FTW

Background

A post by Paul Graham I recently found resonated with what I've been doing at work. In his post, "Taste for Makers," PG posits that beauty is not wholly subjective and that good design is beautiful. Among other things, good design:
  • is simple
  • solves the right problem
  • is suggestive
  • looks easy
  • uses symmetry
  • is redesign
  • can copy
  • is often quite strange
  • happens in chunks

I'd like to focus on a few of these descriptions and walk through an example from my own recent work.

In his fantastic book, Design Patterns in Ruby, Russ Olsen describes one tenet of the Gang of Four book: "prefer composition over inheritance." Inheritance creates tighter coupling between classes, since the children of the base class need to know about the internals of the base, even though that coupling is very specific to the implementation and (should be) well understood. Composition changes the relationship between objects: an object no longer is another type of object but has the functionality of another object (is-a vs. has-a). This relationship increases the encapsulation of the composite object by providing an interface to the composed object instead of exposing the underlying details of a base class.
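A minimal sketch of the is-a vs. has-a distinction (the Car and Engine names are mine, not from Olsen's book): the has-a version exposes only the interface it chooses, while the is-a version inherits everything.

```ruby
class Engine
  def start
    'vroom'
  end
end

# is-a: EngineCar inherits all of Engine's internals, wanted or not.
class EngineCar < Engine
end

# has-a: Car holds an Engine and exposes only the interface it chooses.
class Car
  def initialize(engine = Engine.new)
    @engine = engine
  end

  def start
    @engine.start  # delegate to the composed object
  end
end
```

Both versions respond to start, but only Car can later swap its engine for anything with the same interface without touching its ancestry.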

Slices and Dices!

Now I know there is a tendency to think of design patterns as a silver bullet, but bear with me. Inheritance is fine when the tree is simple and the functionality basic. The complexity grows as the tree grows and as more functionality is required. Soon, you're not quite sure whether a new class should inherit from Foo, which inherits from Bar, or whether you should just inherit from Baz up near the root. You'll have to dig into the classes to find out which one is closest to what you want and hope the placement you settle on makes the most sense. Composition, however, gives us much more flexibility for creating new classes and giving them abilities.

An Example

There is a system that asks users different types of questions. One type asks when an event will happen (DateQuestion), one asks for the numerical result of an event (NumberQuestion), and one asks which event will happen given a set of choices (ChoiceQuestion). We have a base Question that each inherits from, and since dates can be represented as numbers, DateQuestion inherits from NumberQuestion. These questions allow answers, comments, and access control lists, and they have a specific workflow (create, activate, suspend, close, etc.).

Later on, the system needs to support a few more types of questions: a numeric range (NumberRangeQuestion), a date range (DateRangeQuestion), a yes/no-only (YesNoQuestion)...you get the point. We need to figure out where each new type goes in the inheritance tree - whether it's a child of DateQuestion (itself a child of NumberQuestion), or just a child of NumberQuestion, or maybe its own type that inherits only from the base Question. We start to bump into unnecessary complexity.
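A sketch of the dilemma, with the class bodies elided: every placement of DateRangeQuestion bakes in assumptions about ancestors it may not actually want.

```ruby
class Question; end
class NumberQuestion < Question; end
class DateQuestion < NumberQuestion; end  # dates "are" numbers here
class ChoiceQuestion < Question; end

# Which parent? Each choice drags in a different slice of behavior.
class DateRangeQuestion < DateQuestion; end
# class DateRangeQuestion < NumberQuestion; end
# class DateRangeQuestion < Question; end
```

Whichever line we pick, DateRangeQuestion inherits the whole chain above it, and the next new type forces the same guessing game all over again.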

I'll Take a Little of This...

Let's approach this problem from a different angle. Given our original Question types, we can make them all inherit from a base Question class and then give them abilities as needed. So now our classes look like this:

class Question
  include Commentable
  include AccessListControllable
  include Workflowable
end

class NumberQuestion < Question
  include Numerical
end

class DateQuestion < Question
  include Numerical
  include Dateable
end

class ChoiceQuestion < Question
  include Choiceable
end

NumberQuestion and DateQuestion are numerical, that is, they have whatever functionality they need to do what numerical objects need to do. The DateQuestion is also dateable, so it has additional properties needed for a dateable object, while NumberQuestion, not needing them, doesn't have those abilities. So when we need additional Question types, we can choose which abilities they need. A DateRangeQuestion? It's dateable, numerical, and it's got its own class-specific functionality as well.
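Here's how a DateRangeQuestion might pick up its abilities under this scheme. The module bodies below are placeholder sketches, not the real implementations - the point is only that a new type composes whatever mix of abilities it needs:

```ruby
# Placeholder ability modules; the real ones would carry actual behavior.
module Numerical
  def numeric?
    true
  end
end

module Dateable
  def dateable?
    true
  end
end

class Question
end

# A new type just mixes in the abilities it needs - no tree surgery.
class DateRangeQuestion < Question
  include Numerical
  include Dateable
end
```

Adding a YesNoQuestion later is the same exercise: inherit from Question and include only the modules that apply.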

There are some trade-offs. Some modules may not have all the functionality an object needs, and there is a potential for writing similar code to provide slightly different abilities. There can also be unneeded functionality in a module that an object will never use. These problems aren't specific to composition, though, as they can occur with regular inheritance as well.

Some Clarity

We've refactored our code to use composition, organizing it a little better and making our application more maintainable and extensible - both good things - and the process was relatively painless. Since the functionality never changed, just the organization, passing tests let us feel confident that our models still work how we want.