Development Blog

 Sunday, October 26, 2008

I recently came across Balsamiq, a very well done application for quickly mocking up UI prototypes. With simple drag-and-drop you can create some pretty slick prototypes in minutes. The prototypes have a sketch-like look that lets the viewer use more of their imagination, something I think is a subtle yet powerful advantage.

It's been a while since I've been so impressed with an app right out of the box... especially a Flash app. Just about everything works as I'd expect, right down to common keyboard shortcuts. It's got good organization in its "ribbon" and so far it's had just about every type of thing I need. It looks like it has JIRA and Confluence support, though that's a bit on the pricey side. The Desktop version is reasonably priced though, and the web version seems to be free to use if you can put up with a nag every 5 minutes.

Nothing beats a whiteboard if you're all in the same office, but I'd say this is a close second. Certainly better than Visio :)

by Aaron on Sunday, October 26, 2008 5:12:55 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  |  Trackback
 Wednesday, October 22, 2008

Assembla was annoying us for a variety of reasons, so we moved Machine to GitHub.

git clone git://github.com/machine/machine.git

The astute may notice a few other repositories in the machine account... Jacob's been busy. We'll announce those projects in due time...

by Aaron on Wednesday, October 22, 2008 11:57:32 AM (Pacific Standard Time, UTC-08:00)
 Sunday, October 19, 2008

Does ReSharper want to make your specs look like this?

[Screenshot: superfluous privates, indentation and warnings]

But you want your specs to look like this?

[Screenshot: clean text with less noise]

Just follow these easy steps:

  1. Go to ReSharper>Options

  2. Go to Languages>C#>Formatting Style>Other

  3. Uncheck Modifiers>Use explicit private modifier

  4. Uncheck Other>Indent anonymous method body and hit OK

  5. Go to your project properties>Build and suppress warning 169

  6. Enjoy!

bdd | mspec | resharper
by Aaron on Sunday, October 19, 2008 2:18:25 PM (Pacific Standard Time, UTC-08:00)

Unlike vanilla TDD, the artifacts produced by BDD can and should be read by more than just developers. Most of us who practice TDD name our tests more or less like this:

MessageBoardControllerTests.Index_WithTenMessages_ReturnsFiveMostRecentFromRepository()

Shifting into Context/Specification style testing, one may be tempted to write specs like this:

MessageBoardController, when invoking index action when there are ten messages, 
  should return five most recent messages from the repository

The problem with this spec is subtle but important. You often want these specs to be readable and understandable by a normal person, someone in your business who can provide you feedback on your specs. Using words like "invoking", "index", "action" and "repository" is a clear indicator that your audience is another developer. You should use the time spent writing specs to speak in the language of the business and to clarify your ubiquitous language. Here's how I'd rewrite this:

Message Board, when viewed
  should show only the five most recent messages

Again, the difference is subtle, but notice how I could show this to anyone in the company and they would understand exactly what is happening.
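For what it's worth, here's a rough sketch of how that business-language spec might look in MSpec-flavored C#. Everything domain-related here is invented for illustration: MessageBoard, Message, MessagesToShow and MessageBoardWithMessages are hypothetical names, not real project code.

```csharp
using System.Collections.Generic;
using Machine.Specifications;

// Hypothetical sketch: the spec reads in business language, while the
// code body carries the developer-level detail.
[Concern("Message Board")]
public class when_the_message_board_is_viewed
{
  Establish context = ()=>
    board = MessageBoardWithMessages(10);   // hypothetical helper

  Because of = ()=>
    shownMessages = board.MessagesToShow(); // hypothetical method

  It should_show_only_the_five_most_recent_messages = ()=>
    shownMessages.Count.ShouldEqual(5);

  static MessageBoard board;
  static List<Message> shownMessages;
}
```

The class and field names are what a non-developer would see in a report; the magic number 10 and the repository plumbing stay down in the code where they belong.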

There are times for developer-speak in specs, I believe. If you are speccing an API to be consumed by other developers, I think it's OK to use words like "throw" and "return", because that is what developers care about when integrating with an API.

Most of the time however, especially when writing the more UI/System Behavior level specs, you should consider who your audience is and try to speak like them. The code itself will provide the detail a developer needs to understand it.

As an aside, this is one of the many reasons I prefer the Context/Specification style to the Given/When/Then style of BDD. Because people don't speak in Given/When/Then prose in real life, it makes it even more difficult to write your specs for the intended audience. It also leads you to use magic numbers and other magic state in your prose rather than formalizing business concepts and improving your ubiquitous language.

bdd
by Aaron on Sunday, October 19, 2008 9:24:03 AM (Pacific Standard Time, UTC-08:00)

Jacob and I just got back from Austin, TX where we were fortunate enough to attend the week long, lengthily titled Advanced Distributed Systems Design using SOA & DDD with Udi Dahan, The Software Simplist.

Awesome. Just awesome. We’d been meaning to delve into messaging at Eleutian after multiple discussions with and blog posts from Greg Young and Udi Dahan in the past. We weren’t entirely sure where to start, how to start, what tools to use, how to use them, etc. Being able to sit in a room with Udi for an entire week while he described exactly how, why and what he does to tackle a massive enterprise system was invaluable to say the least.

We now have a much better direction and, more importantly, have the confidence we need to start introducing these powerful concepts into production at Eleutian.

If Udi’s ever in your area giving this course and you’ve got a company that cares about its scalability, reliability and maintainability enough to see the value in such an offering, I’d strongly advise giving it a go.

by Aaron on Sunday, October 19, 2008 7:52:18 AM (Pacific Standard Time, UTC-08:00)

Steve Sanderson is apparently writing two Apress books on ASP.NET MVC. While doing so, he’s been digging deep into the framework and inventing/discovering some pretty amazing things. My favorites thus far are:

Partial Requests in ASP.NET MVC

Partial Output Caching in ASP.NET MVC

I’ve yet to use it seriously, but I’m pretty sure this is a big part of what I’ve been looking for when it comes to components in MVC. I’m definitely going to keep an eye on his blog.

by Aaron on Sunday, October 19, 2008 7:41:01 AM (Pacific Standard Time, UTC-08:00)
 Tuesday, September 02, 2008

It's been a while, but we've gotten several new things into Machine.Specifications (MSpec). I'm excited to finally release them for everyone to start playing with. You can grab the bits here.

Let's talk about what's new though. Here's an example of a new context/spec:

  [Concern("Console runner")]
  public class when_specifying_a_missing_assembly_on_the_command_line
  {
    Establish context = ()=>
    {
      console = new FakeConsole();
      program = new Program(console);
    };

    Because of = ()=>
      exitCode = program.Run(new string[] {missingAssemblyName});

    It should_output_an_error_message_with_the_name_of_the_missing_assembly = ()=>
      console.Lines.ShouldContain(string.Format(Resources.MissingAssemblyError, 
      missingAssemblyName));

    It should_return_the_Error_exit_code = ()=>
      exitCode.ShouldEqual(ExitCode.Error);

    const string missingAssemblyName = "Some.Missing.Assembly.dll";
    public static ExitCode exitCode;
    public static Program program;
    public static FakeConsole console;
  }

There have been a few semantic changes

  • The Description attribute has been removed. There is now an optional Concern attribute that allows you to specify a type and/or a string that the context/spec is concerned with.
  • Context before_each is now Establish context.
  • Context before_all is now Establish context_once.
  • Context after_each is now Cleanup after_each.
  • Context after_all is now Cleanup after_all.
  • When {...} is now Because of. This is closer to SpecUnit.NET's verbiage, and doesn't force you to specify the "when" twice.

There is now a console runner

We don't quite have all the options we want yet, but the basics of the runner are working. Here's the help from the runner:

We also stole Bellware's SpecUnit.NET reporting stuff and ported it over. You can now generate a report on your specs with the --html switch. Here's an example run:

This is the report it generates.

Want to try it out?

  1. Grab the drop here.
  2. Extract it somewhere. Put it somewhere semi-permanent because the TestDriven.NET runner will need a static location for the MSpec TDNet Runner.
  3. If you want TestDriven.NET support, run InstallTDNetRunner.bat
  4. Check out the example in Machine.Specifications.Example. Note that you can run with TD.NET.
  5. Create a project of your own. Just add Machine.Specifications.dll and get started.
  6. Send me feedback! Leave comments, email me, tweet me, whatever.

Also, this is part of Machine, so feel free to take a look at the code and/or submit patches. There's also a Gallio adapter in there, but I didn't include it in the release as it's not quite polished enough yet. If you're interested in it, talk to me. Special thanks to Scott Bellware, Jeff Brown and Jamie Cansdale for their help and support. Also, extra special thanks to Eleutian's newest dev, Jeff Olson for much of the recent work that has gone into MSpec!

by Aaron on Tuesday, September 02, 2008 1:49:51 PM (Pacific Standard Time, UTC-08:00)
 Tuesday, July 01, 2008

Mikel Lindsaar recently posted a tip encouraging RSpec users not to use before :each, and to instead set up the context in every "it" specification.

I'm afraid I disagree. By pushing context setup into your specifications, you're allowing your contexts to become artificial and anemic and your specifications to become fat and more than just specifications.

Ultimately, this means that your reports will read poorly and it will be easy to introduce specifications in a context that do not match the others.

Mikel arrives at the following specs at the end of his post:

describe "when not logged in" do
  it "should redirect if we are not logged in" do
    get :index
    response.should redirect_to login_path
  end
end

describe "when logged in" do
  def given_a_logged_in_user
    session[:logged_in] = true
    session[:user_id] = 99
  end

  it "should be a success" do
    given_a_logged_in_user
    get :index
    response.should be_success
  end

  it "should render the index template" do
    given_a_logged_in_user
    get :index
    response.should render_template('people/index')
  end
end

"when logged in" is not what I would consider a valid description of Mikel's context in these specs. I would call it something along the lines of "when visiting the index page while logged in". *That* is the context you are specifying against. Compare:

when logged in, it should render the index template

vs.

when visiting the index page while logged in, it should render the index template

The first is clearly missing something. Unless rendering the index template is a direct result of just *being* logged in, the spec is flawed.

With that in mind, as soon as you describe your context, there's no reason not to pull that context setup into a single before method. It forces you to use that context in every specification contained within your describe. It also makes your tests easier to read. You establish your context, and then you make one-line specifications against that context.
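To make that concrete, here's a sketch of what Mikel's logged-in specs might look like with the context renamed and the setup pulled into a single before block. This reuses the rspec-rails controller helpers from his original (session, get, response), so it's a fragment, not a standalone script:

```ruby
# Sketch: the context name now describes what is actually being
# specified, and the whole context setup (including the visit itself)
# lives in one before block.
describe "when visiting the index page while logged in" do
  before :each do
    session[:logged_in] = true
    session[:user_id]   = 99
    get :index
  end

  it "should be a success" do
    response.should be_success
  end

  it "should render the index template" do
    response.should render_template('people/index')
  end
end
```

Each "it" is now a single line against an established context, and the report reads as a complete sentence.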

I do agree that DRY should not be taken too far in tests. Base classes, helper methods, all that sort of thing can quickly obfuscate them, but do not forsake the context setup.
bdd | rspec | dry
by Aaron on Tuesday, July 01, 2008 7:04:07 PM (Pacific Standard Time, UTC-08:00)
 Thursday, May 08, 2008

Just wanted to quickly note that I tracked down the performance issue in Rhino.Mocks and patched it. I also updated the original post with the new numbers. Enjoy!

by Aaron on Thursday, May 08, 2008 9:56:48 PM (Pacific Standard Time, UTC-08:00)
UPDATE: I tracked down the issue and committed a patch to Rhino.Mocks. Rhino.Mocks is now much more competitive performance-wise, our CI build time nearly halved, and about 4 minutes out of 7 of our test time has disappeared. New numbers below.

I've complained before that Mocking is Slow but I never really dove further into it. Today I decided to actually compare Rhino.Mocks to other mock frameworks on a pure performance basis to see if it was a global problem. I timed 2000 unit tests across 100 classes with 20 tests each. The results were a bit surprising:

Framework                           TD.NET Time    nunit-console Time
---------------------------------------------------------------------
Rhino.Mocks old trunk               57.36s         28.82s
Rhino.Mocks new trunk               22.94s          7.59s
Moq trunk                           18.30s          5.91s
TypeMock 4.2.3 Reflective Mocks     15.36s          9.35s
TypeMock 4.2.3 Natural Mocks        16.92s          9.56s

That's right: according to these tests, the old Rhino.Mocks trunk is at least three times slower than the other frameworks under heavy load in TD.NET, and five times slower in the console. It's also interesting to note that TypeMock is faster than Moq in TD.NET, but slower in the console runner.

While running the Rhino.Mocks tests it's very clear that performance degrades as the run progresses: all the other frameworks executed tests at a near-constant speed per test, but Rhino.Mocks slowed down noticeably about halfway through.

Please feel free to try it yourself; grab the project here. You should be able to just run the four strategy .bat files (run-rhino, run-moq, run-tmock-reflective, run-tmock-natural). Let me know if you find anything interesting.

by Aaron on Thursday, May 08, 2008 7:52:30 AM (Pacific Standard Time, UTC-08:00)

As some of you who follow me on twitter know, I've been working on Yet Another Context/Specification Framework as an experiment. Yeah, I know we already have NSpec and NBehave, and they're great and all, but MSpec takes things on from a slightly different angle, and it's just an experiment (for now). Here's a sample Description:

[Description]
public class Transferring_between_from_account_and_to_account
{
  static Account fromAccount;
  static Account toAccount;

  Context before_each =()=>
  {
    fromAccount = new Account {Balance = 1m};
    toAccount = new Account {Balance = 1m};
  };
  
  When the_transfer_is_made =()=>
  {
    fromAccount.Transfer(1m, toAccount);
  };
   
  It should_debit_the_from_account_by_the_amount_transferred =()=>
  {
    fromAccount.Balance.ShouldEqual(0m);
  };

  It should_credit_the_to_account_by_the_amount_transferred =()=>
  {
    toAccount.Balance.ShouldEqual(2m);
  };
}

And a TestDriven.NET run:

------ Test started: Assembly: Machine.Specifications.Example.dll ------

Transferring between from account and to account
  When the transfer is made
    * It should debit the from account by the amount transferred
    * It should credit the to account by the amount transferred


2 passed, 0 failed, 0 skipped, took 0.79 seconds.

Err, What?

Different, eh? The idea was heavily inspired by Scott Bellware's SpecUnit.Net framework, which he showed at the ALT.NET conference. It also took heavy cues from RSpec and my insanity. I realize the code doesn't look much like C# code and I'm OK with that. Many have asked, and will ask, why I don't just use Boo, or RSpec w/ IronRuby eventually, or even one of the existing Context/Spec/BDD frameworks. Those are good questions, but my main motivations are tooling and syntax. I enjoy the tooling I get in C#, and I personally like the syntax of this library considering the limitations imposed by C#.

How's it work?

The simplest way to describe it is to compare it to a normal *Unit style testing framework:

  • Description = TestContext
  • Context before_each = SetUp
  • Context before_all = SetUpFixture
  • Context after_each = TearDown
  • Context after_all = TearDownFixture
  • When = Also SetUp, but happens after Context before_each
  • It = Test

Rather than methods and attributes, MSpec uses named delegates and anonymous functions. The only reason for this is readability. You'll also notice that the fields used in the context are static. This is necessary so that the anonymous functions in the field initializers can access them. Probably the first thing you noticed is the =()=> construct. I won't mention the names that this was given on twitter, but I think it's an acceptable thing to have to deal with in exchange for the cleanliness of the rest of the syntax.
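The mechanics are simpler than they might look. Here's a hypothetical miniature of the trick (not MSpec's actual source, and with no setup/teardown or reporting): a couple of delegate types plus a runner that reflects over the fields and invokes them.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical sketch of the core idea: spec steps are private fields
// of named delegate types, assigned anonymous functions.
public delegate void When();
public delegate void It();

public class Transferring_money
{
  static decimal balance = 0m;

  When the_transfer_is_made = ()=> balance += 100m;

  It should_credit_the_account = ()=>
    Console.WriteLine(balance == 100m ? "pass" : "fail");
}

public class TinyRunner
{
  public static void Main()
  {
    var spec = new Transferring_money();
    var fields = typeof(Transferring_money)
      .GetFields(BindingFlags.Instance | BindingFlags.NonPublic);

    // Run every When step first, then evaluate each It.
    foreach (var f in fields.Where(f => f.FieldType == typeof(When)))
      ((When)f.GetValue(spec))();
    foreach (var f in fields.Where(f => f.FieldType == typeof(It)))
      ((It)f.GetValue(spec))();
  }
}
```

The fields have to be static for the same reason given above: anonymous functions in field initializers can't touch instance members.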

Ok, you're crazy, but how do I try it?

First, this is a very rough cut. Everything is subject to change as we experiment with the language. That said, here's how you play with it:

  1. Grab the drop here.
  2. Extract it somewhere. Put it somewhere semi-permanent because the TestDriven.NET runner will need a static location for the MSpec TDNet Runner.
  3. If you want TestDriven.NET support, run InstallTDNetRunner.bat
  4. Check out the example in Machine.Specifications.Example. Note that you can run with TD.NET.
  5. Create a project of your own. Just add Machine.Specifications.dll and get started.
  6. Send me feedback! Leave comments, email me, tweet me, whatever.

Also, this is part of Machine, so feel free to take a look at the code and/or submit patches. There's also a Gallio adapter in there, but I didn't include it in the release as it's not quite polished enough yet. If you're interested in it, talk to me. Special thanks to Scott Bellware, Jeff Brown and Jamie Cansdale for their help and support.

by Aaron on Wednesday, May 07, 2008 11:11:27 PM (Pacific Standard Time, UTC-08:00)
 Friday, April 25, 2008

Introduction

We used ActiveRecord migrations at Eleutian for a number of months. Everything was right with the world and we had no major complaints until we started running into a problem with our more complex migration scenarios. Usually, we see two kinds of migrations:

  • Schema - Add/remove/rename columns, manipulate tables, and general schema related changes.
  • Data - Refactoring data into new schemas and major data reorganization or population.

Schema migrations are handled well by most of the migration frameworks out there. One problem we kept running into is that it's typically easier to write the data migrations directly in C# using our entities and services. As a result we had two different ways of migrating, and this made our frequent publishes problematic. We decided to consolidate the two operations.

On the outside there's nothing really new in Machine.Migrations that you haven't seen elsewhere. Most of the changes are internal. Migrations are numbered right now, although we've talked of moving to time-stamping them a la the newer ruby ActiveRecord. What Machine.Migrations does allow you to do is heavily tweak your migrations to fit your project. In our project we have a custom MsBuild task that extends the default implementation to provide access to our DaoFactory and other goodies. If this is something that interests you, then read on. If you're happy with your current migration scenario then you probably don't have a huge need for Machine.Migrations.

A Simple Migration

I'll start with the actual migrations. Here is an example that creates a table:

using System;
using System.Collections.Generic;
using Machine.Migrations;

public class CreateUserTable : SimpleMigration
{
  public override void Up()
  {
    Schema.AddTable("users", new Column[] {
      new Column("Id", typeof(Int32), 4, true, false),
      new Column("Name", typeof (string), 64, false, false),
      new Column("Email", typeof (string), 64, false, false),
      new Column("Login", typeof (string), 64, false, false),
      new Column("Password", typeof (string), 64, false, false),
    });
  }

  public override void Down()
  {
    Schema.DropTable("users");
  }
}

Machine.Migrations does enforce a file-naming/class-naming convention, so the file for this migration would be named 001_create_user_table.cs. We inherit from SimpleMigration, which is built in and provides access to schema manipulations and raw database queries through the Schema and Database properties, respectively. We keep a solution and project for our migrations so that we get IntelliSense while writing them, which can be helpful if you forget the syntax.

When you run Machine.Migrations it will only compile the migrations that it needs to apply. This is very handy if you are doing high-level migrations involving your entities, because then you don't have to maintain those migrations after they've been applied to your live database. But this does mean you won't be able to migrate from the beginning to the end, because old migrations will become outdated and stop compiling. This has been only a minor problem for us because we snapshot our live database and run that in development. I do think a better long-term solution is required though, in order to make creating a brand new database easier. Perhaps flagging migrations as data migrations and not running them if they don't compile? I am not sure.

There is also a Boo migration factory that you can use if you don't want to write your migrations in C#. Internally, the infrastructure is there if you want to add other languages as well.

Another interesting thing I should note is that each migration is run inside its own transaction, so if you mess up it'll just rollback. This is very nice under SQL Server because schema changes are rolled back as well.

How Do I Start Using It?

If you just want to use Machine.Migrations out of the box (a good first step) it's fairly straightforward:

  1. Copy Machine.Migrations and its dependencies into your project. Be sure to include the Machine.Migrations.targets file which has the default MsBuild task and target.
  2. You can then drop the following XML into your MsBuild file (check all your paths!):
    <Import Project="Libraries\Machine\Machine.Migrations.targets" />
    
    <PropertyGroup>
      <MigrationConnectionString>Data Source=127.0.0.1;Initial Catalog=MyDb;Integrated Security=SSPI</MigrationConnectionString>
    </PropertyGroup>
    
    <Target Name="Migrate" DependsOnTargets="MigrateDatabase">
    </Target>
  3. All that's happening here is we're importing the targets file, defining a connection string to use when migrating, and then making a target that depends on the target we imported.
  4. By default it will look for your migrations in a subdirectory named Migrations.
  5. Then it's just a matter of running MsBuild YourProject.proj /t:Migrate where YourProject.proj is your MsBuild file.

One thing you may notice is that sometimes your migrations will fail to compile because of a missing dependency. To ensure your own or third-party assemblies are referenced, add them to the MigrationReferences ItemGroup, like so:

<ItemGroup>
  <MigrationReferences Include="System.Xml.dll">
    <InProject>false</InProject>
  </MigrationReferences>
</ItemGroup>

When Machine.Migrations compiles it will include those assemblies in the references list.

What Next?

I'm going to hold off on explaining how to extend the default migration scenario until later because this post is already pretty long. For anyone feeling ambitious it basically amounts to extending SimpleMigration in your own code and inheriting from that. (Be sure to include your assembly in MigrationReferences). We go a step further and wrap the MsBuild task as well, which may or may not be necessary for you.
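As a rough sketch of that shape, here's what a project-specific base migration might look like. Only SimpleMigration, Up, and Down come from Machine.Migrations; the IDaoFactory property, how it gets populated, and the dao calls are all invented for illustration:

```csharp
// Hypothetical: a project-specific base class for high-level data
// migrations. Your custom MsBuild task (or whatever hosts the run)
// would be responsible for handing each migration its services.
public abstract class OurMigration : SimpleMigration
{
  public IDaoFactory DaoFactory { get; set; }  // invented property
}

public class MigrateLegacyUsers : OurMigration
{
  public override void Up()
  {
    // A data migration written against your own entities and services
    // instead of raw SQL (FindAll and CreateUserDao are invented).
    foreach (var user in DaoFactory.CreateUserDao().FindAll())
    {
      // ... reshape data using real domain code ...
    }
  }

  public override void Down()
  {
    // Data migrations often have no meaningful Down.
  }
}
```

Remember that the assembly containing OurMigration has to be listed in MigrationReferences or the generated migrations won't compile.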

I can't stress enough that going down this road isn't for everybody. There is an argument for keeping migrations simple and making those complex data migrations something else's concern. For now, this is what we're doing and it has worked nicely. I'm sure there will be lots of questions as well; please feel free to comment and offer up suggestions. As far as I know we're the only people using this code, so I'm sure there are quirks.

by Jacob on Friday, April 25, 2008 12:37:26 PM (Pacific Standard Time, UTC-08:00)