Development Blog

 Thursday, April 03, 2008

Daniel Cazzulino, author of Moq, posted a good comment on my last post, where I suggested looking into a Mockito-like syntax for .NET mock frameworks.

On the surface, Mockito's approach seems good. But if you do the "true" comparison, you'll see that stub(...) is exactly the same as mock.Expect(...) in Moq.

Then, when you do verify(...), you have to basically repeat the exact same expression you put in stub(...). This might work if you only have a couple calls to verify, but for anything else, it will be a lot of repeated code, I'm afraid.

I thought this too. See my comment here from a month ago. Szczepan made a good point and I've thought about it more since then.

When combined with my position on loose vs. strict mocks (almost always use loose), I'd say that *most* of the time you are either stubbing or verifying. That is, if you're verifying, you don't need to stub, unless that method returns something critical to the flow of your test; in that case you don't really need to verify, because the flow itself would have verified it. That's a mouthful, but does that make sense?

I haven't used Mockito, and I know there are times I use Expect.Call with return values that matter (which would essentially require you to duplicate the stub and the verify), but maybe that's a smell? Maybe if you think you need that, you can do state-based testing or change your API?

Here's an example test using Rhino.Mocks:

public void SomeMethod_Always_CallsSendMail()
{
  IMailSender sender = mocks.DynamicMock<IMailSender>();
  UnderTest underTest = new UnderTest(sender);

  using (mocks.Record())
  {
    Expect.Call(sender.SendMail()).Return(true);
  }

  using (mocks.Playback())
  {
    underTest.SomeMethod();
  }
}

And some code this is testing (obviously not test driven, but you get the idea):

public void SomeMethod()
{
  if (!_sender.SendMail())
    throw new Exception("OH NOS");
}

Notice that here we would need to stub and verify separately with a Mockito-like syntax. It would look something like this:

public void SomeMethod_Always_CallsSendMail()
{
  IMailSender sender = mocks.DynamicMock<IMailSender>();
  UnderTest underTest = new UnderTest(sender);

  Stub.That(() => sender.SendMail()).Returns(true);

  underTest.SomeMethod();

  Verify.That(() => sender.SendMail()).WasCalled();
}

This may violate DRY, but what if you designed your API differently? Maybe SendMail should throw an exception on failure instead of returning a boolean? This would make the return value unnecessary and remove the need for the Stub call. Clearly you can't always do this, especially with unwrapped legacy or API code, but it's something to think about.
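As a rough sketch of that redesign (all names here are mine, invented for illustration, not from any real API): if sending mail throws on failure, the caller never branches on a return value, so the test needs no stubbed return at all — only a verification that the call happened.

```java
// Hypothetical redesign: sendMail() throws on failure instead of returning a bool,
// so callers (and tests) never branch on a return value.
interface MailSender {
    void sendMail() throws Exception;
}

class UnderTest {
    private final MailSender sender;

    UnderTest(MailSender sender) { this.sender = sender; }

    // No more `if (!sender.sendMail()) throw ...` -- failure propagates on its own.
    void someMethod() throws Exception {
        sender.sendMail();
    }
}

public class Example {
    public static void main(String[] args) throws Exception {
        // A hand-rolled spy: no stubbed return value needed, only a call count.
        final int[] calls = {0};
        MailSender spy = () -> calls[0]++;

        new UnderTest(spy).someMethod();

        if (calls[0] != 1) throw new AssertionError("sendMail was not called");
        System.out.println("sendMail called " + calls[0] + " time(s)");
    }
}
```

The test collapses to a single verification, which is exactly the DRY win the redesign buys.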

Also, to go along with the one-assert-per-test rule, I generally think you shouldn't be verifying more than one method, so a single repeat would not be that horrendous. Heck, you could even do:

public void SomeMethod_Always_CallsSendMail()
{
  IMailSender sender = mocks.DynamicMock<IMailSender>();
  UnderTest underTest = new UnderTest(sender);

  Stub.That(var sendMail = () => sender.SendMail()).Returns(true);

  underTest.SomeMethod();

  Verify.That(sendMail).WasCalled();
}
I think the syntax would lead to better, more concise tests. But maybe it would just be too annoying? I wouldn't know until I tried it for a while, I guess.

by Aaron on Thursday, April 03, 2008 8:44:29 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  |  Trackback
 Wednesday, April 02, 2008

(Note: I'm going to speak about .NET mock projects here for the most part, but most of them have Java quasi-equivalents.)

The original mocking frameworks, like NMock, required you to set up expectations by passing strings for method names. This was fragile and made refactoring more difficult.

A few mock frameworks now allow you to define expectations and mock results in a strongly typed manner. Rhino Mocks and TypeMock use a record/replay model to set up expectations. The record/replay model is mostly necessary because the same calls are made on the same objects under two different scenarios. This leads to a few issues.

The first issue is confusion and barrier to entry. Many people have complained that the record/replay model is not straightforward and the whole paradigm is confusing. There are also complaints about the naming: are you really recording and then replaying? It's just kind of a strange thing. Of course, most of us learn to live with it, understand it, and accept it for what it is. Recently though, a few mock frameworks have popped up that do away with this model.

In the .NET world we have Moq. Moq gets rid of the need for record/replay because recordings have a very different syntax: they use lambdas instead of actual calls to the mock object. This allows the framework to know when you are recording an expectation and when you are fulfilling one. It adds a bit of noise in the form of "() =>", but all in all it's not bad. Of course, this requires C# 3.0, but it's good to keep looking ahead.

In the Java world we have Mockito. Mockito also does away with the record/replay model, but it does it in a different way. At first I wasn't a fan, but thinking about it more, I like it. Mockito has two main APIs: stub and verify. Stub is equivalent to SetupResult.For, and verify is equivalent to Expect.Call with a verify. The interesting bit is that the stubbing happens before the class under test is invoked, and the verifying (which includes describing the method to be verified) happens after the class under test is invoked. This is best shown with an example stolen from the Mockito site:

  //stubbing using built-in anyInt() argument matcher
  stub(mockedList.get(anyInt())).toReturn("element");

  //stubbing using hamcrest (let's say isValid() returns your own hamcrest matcher):
  stub(mockedList.contains(argThat(isValid()))).toReturn(true);

  //following prints "element"
  System.out.println(mockedList.get(999));

  //you can also verify using argument matcher
  verify(mockedList).get(anyInt());

Obviously it would take a bit of imagination to arrive at a .NET equivalent, but you get the idea. I like this because the normal test structure is: set up stuff, do stuff to test, then verify stuff did what it should have. The record/replay model requires you to set up verifications before you actually do stuff (though you call VerifyAll afterwards), which is a bit less natural. I feel syntax like this (yeah, I like the new NUnit syntax) would be more intention-revealing:

Assert.That(() => someMock.Foo(), Was.Called);

or

Verify.That(() => someMock.Foo()).WasCalled();

Then you would stub like this:

Stub.That(() => someMock.Bar()).Returns(3);

Note: I have no idea if this is feasible or makes sense; my lambda experience is limited to light reading, but you get the idea. I'm sure the syntax could also be prettier.
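For what it's worth, the lambda-verify idea does seem feasible. Here is a rough proof-of-concept in Java (every name here is invented; this is not any real framework's API): the mock records each invocation, and the verify step runs the supplied lambda against a probe proxy to learn which method it names, then checks the recording.

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Feasibility sketch only: a mock that records every invocation, and a
// verify helper that replays the lambda against a probe proxy to find out
// which method the caller meant, then checks the recorded calls.
public class MiniMock {
    static final List<String> recorded = new ArrayList<>();

    @SuppressWarnings("unchecked")
    static <T> T mock(Class<T> type) {
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type},
            (proxy, method, args) -> { recorded.add(method.getName()); return null; });
    }

    @SuppressWarnings("unchecked")
    static <T> void verifyThat(Class<T> type, Consumer<T> call) {
        final String[] invoked = new String[1];
        // The probe captures which method the lambda calls, without recording it.
        T probe = (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type},
            (proxy, method, args) -> { invoked[0] = method.getName(); return null; });
        call.accept(probe);
        if (!recorded.contains(invoked[0]))
            throw new AssertionError(invoked[0] + " was never called");
    }

    interface MailSender { void sendMail(); }

    public static void main(String[] args) {
        MailSender sender = mock(MailSender.class);
        sender.sendMail();                               // the code under test would do this
        verifyThat(MailSender.class, s -> s.sendMail()); // passes: the call was recorded
        System.out.println("verified");
    }
}
```

A real implementation would need argument matching and call counts, but the core trick — letting the lambda describe the expected call after the fact — works with nothing fancier than a dynamic proxy.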

Rhino.Mocks is my current mock framework of choice. I'm used to it, I've lightly contributed to it, and I've been working with it for a while now. Despite that, I do think there is definitely more to explore in the mocking arena, especially with C# 3.0.

There are lots of other fun things to talk about too... like TypeMock's magic, but that's another day still...

by Aaron on Wednesday, April 02, 2008 8:13:18 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [3]  |  Trackback

Recently, Aaron talked about how we keep a published branch. Lately, we have also been having to branch our code more frequently in order to work on big, messy changes without disturbing ongoing development in the main trunk. Branching is a great tool in those situations, but many find themselves cursing the day they decided to branch when they run into a billion conflicts while merging their changes back to the trunk. It doesn't have to be like that! So to help out those who've had nightmarish merge experiences, or to ease the fears of those who have yet to delve into the mysterious world of Subversion branches and merges, I have decided to share my thoughts on proper branching technique.

What is Branching?

Let's start with the basics. What is branching? Why would we ever need to do it? Well... everyone knows the golden rule of using version control systems is to "check in often". Checking in often gives us more flexibility if we make a mistake and want to go back to a previous version, and it also helps minimize conflicts when several developers are working on the same piece of code. However, as good a practice as checking in often might be, sometimes it's just hard to do, especially if we need to make a big, all-or-nothing change that will take an extended period of time. In the meantime, we cannot afford to check in our half-completed change, because it would break things and prevent, or at least seriously hinder, other developers from working on their own stuff. Luckily, we can simply make a copy of the current trunk in the repository and check our changes in to that copy. We can now check in as often as we'd like without disturbing anyone. When we are done with our big change, we can merge it all back at once and everyone is happy. We call this copy a "branch".

How do we Branch?

Branching is easy. Subversion does not really have a built-in concept of a "branch". A branch is simply a copy of your project's directory in the Subversion repository. Where you copy it to is entirely up to you, but a good convention is generally a directory structure like:

/Trunk
/Branches
/Tags

The URL for your main repository would look something like:

https://yoursvnserver/svn/project/Trunk

while the URL for your branch would be:

https://yoursvnserver/svn/project/Branches/MyBranch

To create the branch we would simply use the Subversion cp command:

svn cp https://yoursvnserver/svn/project/Trunk \
https://yoursvnserver/svn/project/Branches/MyBranch \
-m "Creating a branch for my big change"

now we could simply check out the new branch:

svn co https://yoursvnserver/svn/project/Branches/MyBranch

and work on it to our heart's content. Any changes we commit will go to the branch and not disturb our main trunk code.

Keeping the Branch Up to Date and Merging

Now, here is the tricky part that many people screw up. While the whole point of the branch is to keep changes we make on the branch isolated from trunk, we DO NOT want to keep changes that are going on in trunk isolated from the branch. This is a recipe for disaster...or rather... a very painful merge. You see, the longer we keep the branch completely isolated from trunk, the greater the likelihood that nasty conflicts will happen.

The idea then is to take the changes that are going on in our project's trunk and apply them to our branch. Sure, this won't guarantee that you won't see any conflicts, but it is exponentially easier to resolve conflicts incrementally as you go along than to let them pile up until the end when you've already forgotten what the heck you were actually doing in the code that is now conflicting.

Enter the "merge" command. I think a lot of people misunderstand what this command does because its name is misleading and makes it seem more mysterious than it actually is. I personally think this command should be renamed "diff and patch", because that is exactly what it does.
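The diff-and-patch idea is easy to demonstrate with the plain diff and patch tools, no Subversion required. The files below are stand-ins I made up for this demo: one for the trunk as it was when we branched, one for the trunk's HEAD, and one for our branch working copy.

```shell
# Simulate "merge = diff + patch" with plain files.
set -e
mkdir -p /tmp/merge-demo && cd /tmp/merge-demo

# trunk as of the revision where we branched
printf 'a\nb\nc\nd\ne\n' > trunk_at_100.txt
# trunk HEAD: someone added a line since we branched
printf 'a\nb\nc\nd\ne\nf\n' > trunk_at_head.txt
# our branch working copy, carrying our own change to the first line
printf 'A\nb\nc\nd\ne\n' > branch_wc.txt

# "merge trunk@100 trunk@HEAD into the branch" is morally:
# diff the two trunk states...  (diff exits 1 when files differ, hence || true)
diff -u trunk_at_100.txt trunk_at_head.txt > trunk.patch || true
# ...then patch the branch working copy with the result.
patch branch_wc.txt < trunk.patch

cat branch_wc.txt   # now has both our change (A) and trunk's new line (f)
```

Subversion's merge does the same thing, just against repository paths and with smarter conflict handling.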

So how do we use the merge command to keep the branch synchronized? Well, first let's talk about what we are trying to do. Say we were at revision 100 when we did the "svn cp" command that created our branch. Then we worked on our branch for a few hours and made several commits to it. Meanwhile, other developers continued working on the trunk and also made several commits. At the end of the day we want to incorporate all the changes that have been made to the trunk since we branched. This is easy! We go into the root of our branch working copy and execute:

svn merge https://yoursvnserver/svn/project/Trunk@100 \
https://yoursvnserver/svn/project/Trunk@HEAD

Actually, there are shorter ways to write this, but let's stick to the long way for the time being and explain what the command actually does. The svn merge command needs only two parameters to indicate what changes we want. In this case we are asking svn for all the changes that happened in the trunk from revision 100 to the HEAD revision, applied to our current working copy.

It is important to understand that merge does not actually modify the repository at all, it simply patches our working copy with the result of diffing /Trunk@Head and /Trunk@100. Now you can take a look at the results, resolve any conflicts, make sure everything compiles fine and then commit all changes to the branch. You will probably want to add a commit message such as "merged in changes up to Trunk@####", where #### would be whatever version number your Trunk was at when you ran the merge command. Why do this? Well the next time you merge you'll only want to incorporate changes that happened to trunk since your last merge, so it's a good idea to keep track of what changes you have already merged so you don't try to merge them again.

It's a good idea to keep merging changes from trunk into your branch as often as possible, although probably not as often as you'd run an svn update under normal circumstances. Once a day should be good enough for most people, but if you start to see lots of conflicts you might want to do it more often.

Merging it all back to Trunk

So you finished your big change, everything works, and it's time to merge all your changes back to the trunk so that all your fellow developers can be amazed by how great a job you have done. Well, if you followed my advice and regularly merged changes from trunk into your branch, then this is actually a very painless task. When you think about it, you've already incorporated all the changes the other developers made in trunk into your branch, so your branch already looks like what you want the final result to be. All we have to do now is make our trunk look exactly like our branch and we are done. To do this, we should go into the directory in which we have our trunk checked out and run the following command:

svn merge https://yoursvnserver/svn/project/Trunk@HEAD \
https://yoursvnserver/svn/project/Branches/MyBranch@HEAD

WHAT?? I think this command needs a little explanation, but it's not hard if we understand the diff+patch concept. Basically, we are asking Subversion to diff our branch and the trunk and then patch those changes into the trunk (you did remember to cd to trunk's working dir, right?). In other words, we are telling Subversion to give us all the changes needed to get from Trunk@HEAD to MyBranch@HEAD and then apply those changes to trunk's working dir. If we patch the trunk with the changes necessary to go from Trunk -> MyBranch, then what does the trunk end up looking like? That's right... it ends up being exactly like MyBranch. Now all that is left to do is commit the changes, and we have successfully accomplished a branch/merge.

Other fun uses of Merge

Once you understand that merge is simply diff+patch, you can find other creative uses. How many times have you committed something, then realized you screwed up and wished you could undo that commit? Merge can be used to quickly undo it, right from the working copy:

svn merge .@HEAD .@PREV
svn commit -m "Undoing my last commit"

or you might just undo one file

svn merge MyBadMistake.cs@HEAD MyBadMistake.cs@PREV

The possibilities are endless!


I hope I've managed to demystify the svn merge command somewhat. If you already knew all this... well... sorry to bore you. But if I saved at least one poor soul out there from the headaches of branching/merging the hard way, then I have accomplished what I set out to do.

by Dan on Wednesday, April 02, 2008 12:31:54 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [10]  |  Trackback
 Tuesday, April 01, 2008

Well, here I am. I'm the newest member of the Eleutian dev team. I've been settling in at Eleutian for the last couple of months and thought it'd be as good a time as any to introduce myself and start blogging. Prior to Eleutian I worked on web applications in Java, so I've been settling into the whole C#/.NET thing over the last few months as well. Hopefully my ramblings will be of some use to someone.

by Dan on Tuesday, April 01, 2008 10:20:08 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  |  Trackback
 Wednesday, March 26, 2008

Moq has been getting some press lately because it's the newest mock framework on the block. I think it's certainly interesting and I'll have more to say on it later, but I wanted to briefly complain about one aspect real quick.

Moq touts a more simplified API compared to something like Rhino.Mocks. The API has a single entry point, which certainly aids discoverability, but I question one of the design decisions. I remember seeing someone say that Rhino had too many types of mocks and that that was confusing. Well, Rhino doesn't have this many different mock behaviors:

  public enum MockBehavior
  {
    Strict,
    Normal,
    Relaxed,
    Loose,
    Default = Normal,
  }

Why have this many? Does anyone know what they do just by looking at them? At least they're documented, but the docs are quite a mouthful:

  public enum MockBehavior
  {
    /// Causes the mock to always throw 
    /// an exception for invocations that don't have a 
    /// corresponding expectation.
    Strict,
    /// Matches the behavior of classes and interfaces 
    /// in equivalent manual mocks: abstract methods 
    /// need to have an expectation (override), as well 
    /// as all interface members. Other members (virtual 
    /// and non-virtual) can be called freely and will end up 
    /// invoking the implementation on the target type if available.
    Normal,
    /// Will only throw exceptions for abstract methods and 
    /// interface members which need to return a value and 
    /// don't have a corresponding expectation.
    Relaxed,
    /// Will never throw exceptions, returning default  
    /// values when necessary (null for reference types 
    /// or zero for value types).
    Loose,
    /// Default mock behavior, which equals Normal.
    Default = Normal,
  }

I'm of the opinion that you should only have one type of Mock, and that's what Rhino calls Dynamic and Moq calls Loose. I described why here. If I wanted to simplify mocking, I'd start here.

by Aaron on Wednesday, March 26, 2008 8:29:45 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [4]  |  Trackback
 Wednesday, March 19, 2008

As Dave so promptly pointed out, immediately after a grueling hazing from which I still have not recovered, Jacob and I have been invited to join CodeBetter. I'm very excited to do so. Thanks to everyone at CodeBetter!

I'm going to cross-post to both our Eleutian Blog and my CodeBetter Blog for now, so don't worry about adjusting your subscriptions unless you start seeing double... and don't want to read my posts twice.

Thanks again to CodeBetter for having us!

by Aaron on Wednesday, March 19, 2008 8:43:08 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  |  Trackback
 Monday, March 17, 2008

Here's the third installment of our Vim Screencast. In this one I cover a few ways to move around a document quickly with search.

If you want to see the old screencasts you can see them here:

Also, if you're new to our blog, be sure to subscribe to get new screencasts as I release them!

And special thanks to Roy Osherove for Keyboard Jedi.


You can get Vim here, but I'd ultimately recommend ViEmu, a Vim emulator plugin for Visual Studio.

by Aaron on Monday, March 17, 2008 5:32:46 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [6]  |  Trackback
 Friday, March 07, 2008

Sean Chambers asked how we work with a published branch so I figured I'd post on the topic since it's a somewhat interesting one. It's not trivial and it took us a few tries to get it where it is now, and it's still not quite right.

The first step to using a published branch is to create the branch. You can do that like this:

svn cp https://yoursvnserver/svn/project/trunk \
https://yoursvnserver/svn/project/branches/published \
-m "Branching published"

After that, we actually check out the whole branch. It's very useful to have both the published branch and the trunk on one machine. Note that this isn't necessarily trivial; its feasibility entirely depends on your build. Having a location-agnostic build is very important, and this is one of the reasons.

Now we have both trunk and published branches. We almost always do all of our work in trunk and then merge over to published. Jacob actually wrote a PowerShell script to make the merges easier (from merge.ps1):

param([int]$rev, [string]$reverse)

if (!$rev)
{
  echo "Merge.ps1 REV [REVERSE]"
  exit
}

if (!$reverse)
{
  $branch = "Published"
  $url = "https://yoursvnserver/svn/project/trunk"
}
else
{
  $branch = "Trunk"
  $url = "https://yoursvnserver/svn/project/published"
}

$previous = $rev - 1
$range = $previous.ToString() + ":" + $rev

pushd $branch
svn merge -r $range $url
popd

The reason for doing most of the work in trunk is that, oftentimes, any issues we run into on the published site will still be issues in the trunk. It makes sense to apply the work there first and then merge it over. The only time we patch published directly is when we need a hack to make something work so we can fix it the right way on the trunk later, or when the feature is completely different on the trunk and the fix would not apply. Of course, this is dangerous, and you have to remember to fix the underlying issue in the trunk before publishing from trunk again. Bug trackers help with that.

That brings us to publishing from trunk again. Merging everything from trunk into the published branch is a giant pain, and just won't work if you've applied many hacks to the published branch. I strongly advise against this. Instead, just start over:

svn rm https://yoursvnserver/svn/project/branches/published \
-m "Deleting published branch"
svn cp https://yoursvnserver/svn/project/trunk \
https://yoursvnserver/svn/project/branches/published \
-m "Branching published"

By nuking it and re-copying it, you can just svn up your published branch and you'll have everything from trunk. For whatever reason, copying over it without removing it first caused us issues, so I'd recommend this two-phase approach.

Other things we learned along the way are the pros and cons of shared "stuff". We have at least 10 gigs of course content and a few other resources that don't need to live in separate branches the way our main trunk and published do. We pulled those into their own repositories and keep them in shared directories so both installs can reference them. We also share a set of common build scripts between the two.

This is both a good and a bad thing. It's good because it removes some duplication and allows us to use a separate repo for these scripts (which is handy for some TC build configurations that only need the scripts, though I guess we could just check out the scripts directory from trunk...), but it's bad because sometimes things get out of sync. We'll make changes to the shared scripts and fix the trunk, but it doesn't make sense, or it's prohibitive, to fix the published branch. You can see in the screenshot I posted that our published branch is currently failing; this is likely why. I'd probably recommend keeping all build scripts branched so that you don't run into these sorts of issues.

The next thing to worry about is the database. I mentioned that we use multiple databases on the build server. What I didn't mention is that we also use multiple databases on our dev machines. We usually have three: one for trunk, one for trunk tests, and one for published. The trunk tests db is imported "light", so our tests run in 6 minutes instead of 14. The trunk db is a nearly full copy of the production db, so we have data when we poke around the site on our dev machines. We have a ConnectionStrings.config that is generated from the database info you pass to the build script. You can do something like msbuild Default.msbuild /p:DatabaseName=published and it will build with the appropriate connection string.

For web applications you also have to worry about IIS. We have two web applications configured in IIS. One for trunk and one for published. This allows us to easily switch between the two by just changing our port or vdir in our url. They have multiple virtual directories underneath them that point to our various shared directories.

I think that's most of the tips I can think of right now. Let me know if you have any questions about anything.

by Aaron on Friday, March 07, 2008 4:21:29 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [8]  |  Trackback

I've mentioned before how much I like TeamCity, but I didn't really talk about how we use it. Just recently, Jacob completed some work on multiple build configurations that make our life much easier. I thought I'd go over them here to give you an idea of how we handle continuous integration.

Continuous Integration configurations

  • CI - Trunk - This is the CI configuration everyone has. It watches the trunk of our SVN repository and builds whenever it changes; it also runs our database migrations against the CI trunk database. It has all our built assemblies as artifacts. The artifacts take up a lot of space, so we don't keep them around long.
  • CI - Published - This is just like CI - Trunk, but it watches our published branch, which is where we put everything that we're about to publish to the live site. We keep two branches so that we can make quick fixes to the published site without having to publish new features we're working on. This has its own database that is migrated.
  • Nightly - Trunk - This runs daily rather than watching our source control. It migrates a database on our production database server that is a copy of our live database. It also builds and deploys the trunk to a test address on our production servers. This allows our team in Korea and our stakeholders to see changes every day in a safe environment. The Nightly is also a big part of our localization story, which I'll save for another post.

Database build configurations

  • Snapshot - This and the other db build configurations probably deserve their own post with more details, but I'll do my best to explain these briefly. The Snapshot build configuration takes a point in time snapshot of the database and packages it into a zip file. The zip file becomes a TeamCity artifact that other projects can depend on.
  • Nightly/CI Trunk/CI Published Baseline -

    These configurations import database snapshots into the database they refer to. The only time we need to do this is if a bad migration runs, or we want to "refresh" the data inside the database.

    It is important to note that we do not run CI - Trunk on a complete snapshot of the live database. When we do, it greatly increases the build time because our integration tests run significantly slower in a real database. Instead, we import a "light" database which contains all of the tables, but only the data from our static tables. The users, records, and anything else that grows as we get more and more users are just left empty. This means that we have zero sample data for these things during our integration tests, so we rely heavily on Fluent Fixtures to set up sample data.

    The other two databases do run nearly complete copies of the live database (we exclude log tables basically), so we still get to test our migrations and our site on real data.

Utility build configurations

  • StatSVN - Runs StatSVN on our codebase. It provides somewhat useful source statistics; we mostly use it to see growth and churn.
  • Duplicate Finder - Haven't really used this much, to be honest, but TeamCity has build configurations whose sole purpose is finding duplicate code.

I'm completely enamored with this set of configurations; it makes so many things painless. We still have room for improvement: managing all of the configuration differences between the sites is difficult, and we lack a one-click live publish. We still follow a manual script for that, which is error-prone and dangerous.

by Aaron on Friday, March 07, 2008 6:40:24 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  |  Trackback
 Saturday, February 09, 2008

Jeremy has always been a bit of a sceptic when it comes to the AutoMockingContainer, and he's not alone; apparently hammett isn't a fan either (care to explain, hammett?). I could never understand why he's sceptical, but it looks like he's starting to get a little more curious, as he's including an AMC in StructureMap 2.5.

There are a few interesting things in his implementation. The most noticeable difference is that he extends the MockRepository. This means you don't need two objects floating around, your MockRepository and your AutoMockingContainer; you just combine them. The biggest issue I see with that is you can't do things like this with HubServices:

MyClass foo = _container.Create<MyClass>();

I do this all the time when I use hub services, since it is much easier to just use the real implementation of a HubService: all it does is expose its child services as properties, and those are the things I really want to mock.

I should also reiterate the benefits of using an AMC-friendly test fixture that has instance helper methods for Create, Get, AddService, etc. Now I'm seeing people offload things like mocks.Record and mocks.Playback too; I like this. "mocks.ClassUnderTest" is definitely too verbose for me.

The other big question I have to ask is... why have a StructureMap AMC? Why not just use the existing one? Yes, it uses Windsor behind the scenes, but does that matter? If you're worried about your test dependencies I suppose it does, but are you? Don't get me wrong, I'd jump ship in a heartbeat to the StructureMap AMC if it proved faster as a result of using StructureMap instead of Windsor, but I think that's about the only reason, since most features could easily be added to reach parity. Especially now that the AMC is so well maintained as part of RhinoTools.

by Aaron on Saturday, February 09, 2008 10:52:23 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [5]  |  Trackback