Development Blog

 Wednesday, March 26, 2008

Moq has been getting some press lately because it's the newest mock framework on the block. I think it's certainly interesting, and I'll have more to say about it later, but I wanted to lodge one quick complaint first.

Moq touts a simpler API than something like Rhino.Mocks. The API has a single entry point, which certainly aids discoverability, but I question one of the design decisions. I remember someone saying that Rhino had too many types of mocks and that this was confusing. Well, Rhino doesn't have this many different mock behaviors:

  public enum MockBehavior
  {
    Strict, 
    Normal,
    Relaxed,
    Loose,
    Default = Normal,
  }

Why have this many? Does anyone know what they do just by looking at them? At least they're documented, but the docs are quite a mouthful:

  public enum MockBehavior
  {
    /// <summary>
    /// Causes the mock to always throw
    /// an exception for invocations that don't have a
    /// corresponding expectation.
    /// </summary>
    Strict, 
    /// <summary>
    /// Matches the behavior of classes and interfaces
    /// in equivalent manual mocks: abstract methods
    /// need to have an expectation (override), as well
    /// as all interface members. Other members (virtual
    /// and non-virtual) can be called freely and will end up
    /// invoking the implementation on the target type if available.
    /// </summary>
    Normal,
    /// <summary>
    /// Will only throw exceptions for abstract methods and
    /// interface members which need to return a value and
    /// don't have a corresponding expectation.
    /// </summary>
    Relaxed,
    /// <summary>
    /// Will never throw exceptions, returning default
    /// values when necessary (null for reference types
    /// or zero for value types).
    /// </summary>
    Loose,
    /// <summary>
    /// Default mock behavior, which equals <see cref="Normal"/>.
    /// </summary>
    Default = Normal,
  }

I'm of the opinion that you should only have one type of mock, and that's what Rhino calls Dynamic and Moq calls Loose. I described why here. If I wanted to simplify mocking, that's where I'd start.

by Aaron on Wednesday, March 26, 2008 8:29:45 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [4]  |  Trackback
 Wednesday, March 19, 2008

As Dave so promptly pointed out, immediately after the grueling hazing that I still haven't recovered from, Jacob and I have been invited to join CodeBetter. I'm very excited to do so. Thanks to everyone at CodeBetter!

I'm going to cross-post to both our Eleutian Blog and my CodeBetter Blog for now, so don't worry about adjusting your subscriptions unless you start seeing double... and don't want to read my posts twice.

Thanks again to CodeBetter for having us!

by Aaron on Wednesday, March 19, 2008 8:43:08 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  |  Trackback
 Monday, March 17, 2008

Here's the third installment of our Vim Screencast. In this one I cover a few ways to move around a document quickly with search.

If you want to watch the earlier screencasts, you can find them here:

Also, if you're new to our blog, be sure to subscribe to get new screencasts as I release them!

And special thanks to Roy Osherove for Keyboard Jedi.


You can get Vim here, but I'd ultimately recommend ViEmu, a Vim emulator plugin for Visual Studio.

by Aaron on Monday, March 17, 2008 5:32:46 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [6]  |  Trackback
 Friday, March 07, 2008

Sean Chambers asked how we work with a published branch so I figured I'd post on the topic since it's a somewhat interesting one. It's not trivial and it took us a few tries to get it where it is now, and it's still not quite right.

The first step to using a published branch is to create the branch. You can do that like this:

svn cp https://yoursvnserver/svn/project/trunk \
https://yoursvnserver/svn/project/branches/published \
-m "Branching published"

After that, we actually check out the whole branch. It's very useful to have both the published branch and the trunk on one machine. Note that this isn't necessarily trivial, and its feasibility depends entirely on your build. Having a location-agnostic build is very important, and this is one of the reasons why.

Now we have both trunk and published branches. We almost always do all of our work in trunk and then merge over to published. Jacob actually wrote a PowerShell script to make the merges easier (from merge.ps1):

param([int]$rev, [string]$reverse)

# Usage: Merge.ps1 REV [REVERSE]
# Merges a single revision from trunk into the Published working copy,
# or, if REVERSE is given, from the published branch into the Trunk working copy.
if (!$rev)
{
  echo "Merge.ps1 REV [REVERSE]"
  return
}
if (!$reverse)
{
  $branch = "Published"
  $url = "https://yoursvnserver/svn/project/trunk"
}
else
{
  $branch = "Trunk"
  $url = "https://yoursvnserver/svn/project/branches/published"
}

# svn merges a single revision N as the range (N-1):N
$previous = $rev - 1
$range = $previous.ToString() + ":" + $rev

pushd $branch
svn merge -r $range $url
popd
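
For a single revision, the range math in the script boils down to this (a shell sketch; revision 1234 is hypothetical, and the final svn merge is shown commented out rather than run against a real server):

```shell
# merge.ps1 merges one revision N as the Subversion range (N-1):N.
REV=1234                 # hypothetical revision number
PREV=$((REV - 1))
RANGE="$PREV:$REV"
echo "$RANGE"            # prints 1233:1234

# Inside the Published working copy, the script then runs:
# svn merge -r "$RANGE" https://yoursvnserver/svn/project/trunk
```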

The reason for doing most of the work in trunk is that issues we run into on the published site are often still issues in trunk. It makes sense to apply the work there first and then merge it over. The only time we patch published directly is when we need a hack to make something work right away so we can fix it properly in trunk later, or when the feature is completely different in trunk and the fix wouldn't apply. Of course this is dangerous: you have to remember to fix the underlying issue in trunk before publishing from trunk again. Bug trackers help with that.

That brings us to publishing from trunk again. Merging everything from trunk back into the published branch is a giant pain, and it just won't work if you've applied many hacks to the published branch, so I strongly advise against it. Instead, just start over:

svn rm https://yoursvnserver/svn/project/branches/published \
-m "Deleting published branch"
svn cp https://yoursvnserver/svn/project/trunk \
https://yoursvnserver/svn/project/branches/published \
-m "Branching published"

By nuking the branch and recopying it, you can just svn up your published working copy and you'll have everything from trunk. For whatever reason, copying over the branch without removing it first caused us issues, so I'd recommend this two-phase approach.

Other things we learned along the way are the pros and cons of shared "stuff". We have at least 10 gigs of course content and a few other resources that don't need to live in separate branches the way our main trunk and published code do. We pulled those into their own repositories and keep them in shared directories so both installs can reference them. We also share a set of common build scripts between the two. This is both a good and a bad thing. It's good because it removes some duplication and lets us use a separate repo for the scripts, which is handy for some TeamCity build configurations that only need the scripts (though I suppose we could just check out the scripts directory from trunk). It's bad because things sometimes get out of sync: we'll change the shared scripts and fix the trunk, but it doesn't make sense, or is too costly, to fix the published branch. You can see in the screenshot I posted that our published branch is currently failing; this is likely why. I'd probably recommend keeping all build scripts branched so you don't run into these sorts of issues.

The next thing to worry about is the database. I mentioned that we use multiple databases on the build server. What I didn't mention is that we also use multiple databases on our dev machines. We usually have three: one for trunk, one for trunk tests, and one for published. The trunk tests database is imported "light" so our tests run in 6 minutes instead of 14. The trunk database is a nearly full copy of the production database, so we have real data when we poke around the site on our dev machines. We have a ConnectionStrings.config that is generated from the database info you pass to the build script; you can run something like msbuild Default.msbuild /p:DatabaseName=published and it will build with the appropriate connection string.
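
As a rough sketch of what that generation step might produce (the connection string name, server, and provider attributes here are assumptions; only the database name comes from the /p:DatabaseName property):

```xml
<!-- Hypothetical ConnectionStrings.config generated for /p:DatabaseName=published -->
<connectionStrings>
  <add name="Main"
       connectionString="Data Source=localhost;Initial Catalog=published;Integrated Security=SSPI;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```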

For web applications you also have to worry about IIS. We have two web applications configured in IIS, one for trunk and one for published. This lets us switch between the two just by changing the port or virtual directory in the URL. Each has multiple virtual directories underneath it that point to our various shared directories.

I think that's most of the tips I can think of right now. Let me know if you have any questions about anything.

by Aaron on Friday, March 07, 2008 4:21:29 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [8]  |  Trackback

I've mentioned before how much I like TeamCity, but I haven't really talked about how we use it. Just recently Jacob completed some work on multiple build configurations that make our lives much easier. I thought I'd go over them to give you an idea of how we handle continuous integration.

Continuous Integration configurations

  • CI - Trunk - This is the CI configuration everyone has. It watches the trunk of our SVN repository and builds whenever it changes; it also runs our database migrations against the CI trunk database. It publishes all our built assemblies as artifacts, which take up a lot of space, so we don't keep them around very long.
  • CI - Published - This is just like CI - Trunk, but it watches our published branch, which is where we put everything that we're about to publish to the live site. We keep two branches so that we can make quick fixes to the published site without having to publish new features we're working on. This has its own database that is migrated.
  • Nightly - Trunk - This runs daily rather than watching our source control. It migrates a database on our production database server that is a copy of our live database. It also builds and deploys the trunk to a test address on our production servers. This allows our team in Korea and our stakeholders to see changes every day in a safe environment. The Nightly is also a big part of our localization story, which I'll save for another post.

Database build configurations

  • Snapshot - This and the other db build configurations probably deserve their own post with more details, but I'll do my best to explain these briefly. The Snapshot build configuration takes a point in time snapshot of the database and packages it into a zip file. The zip file becomes a TeamCity artifact that other projects can depend on.
  • Nightly/CI Trunk/CI Published Baseline -

    These configurations import database snapshots into the database they refer to. The only time we need to do this is if a bad migration runs, or we want to "refresh" the data inside the database.

    It is important to note that we do not run CI - Trunk on a complete snapshot of the live database. When we do, it greatly increases the build time because our integration tests run significantly slower in a real database. Instead, we import a "light" database which contains all of the tables, but only the data from our static tables. The users, records, and anything else that grows as we get more and more users are just left empty. This means that we have zero sample data for these things during our integration tests, so we rely heavily on Fluent Fixtures to set up sample data.

    The other two databases do run nearly complete copies of the live database (we exclude log tables basically), so we still get to test our migrations and our site on real data.

Utility build configurations

  • StatSVN - Runs StatSVN on our codebase for source statistics, which are somewhat useful; we mostly use them to watch growth and churn.
  • Duplicate Finder - We haven't really done much with this yet, to be honest, but TeamCity has build configurations whose sole purpose is finding duplicate code.

I'm completely enamored with this set of configurations; it makes so many things painless. We still have room for improvement: managing all of the configuration differences between the sites is difficult, and we still lack one-click publishing to the live site. We follow a manual script for that, which is error-prone and dangerous.

by Aaron on Friday, March 07, 2008 6:40:24 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  |  Trackback
 Saturday, February 09, 2008

Jeremy has always been a bit of a sceptic when it comes to the AutoMockingContainer, and he's not alone; apparently hammett isn't a fan either (any explanation, hammett?). I could never understand the scepticism, but it looks like Jeremy is starting to get a little more curious, as he's including an AMC in StructureMap 2.5.

There are a few interesting things in his implementation. The most noticeable difference is that he extends MockRepository. This means you don't need two objects floating around (your MockRepository and your AutoMockingContainer); they're combined into one. The biggest issue I see with that is that you can't do things like this with HubServices:

_container.AddService(_container.Create<HubService>());
MyClass foo = _container.Create<MyClass>();

I do this all the time when I use hub services, since it is much easier to just use the real implementation of HubService as all it does is expose its child services as properties and those are the things I really want to mock.

I should also reiterate the benefits of an AMC-friendly test fixture that has instance helper methods for Create, Get, AddService, etc. Now I'm seeing people offload things like mocks.Record and mocks.Playback as well; I like this too. "mocks.ClassUnderTest" is definitely too verbose for me.

The other big question I have to ask is: why have a StructureMap AMC at all? Why not just use the existing one? Yes, it uses Windsor behind the scenes, but does that matter? If you're worried about your test dependencies I suppose it does, but are you? Don't get me wrong, I'd jump ship in a heartbeat to the StructureMap AMC if it were proven faster as a result of using StructureMap instead of Windsor, but I think that's about the only reason, since most features could easily be added to keep parity, especially since the AMC has been well maintained now that it's part of RhinoTools.

by Aaron on Saturday, February 09, 2008 10:52:23 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [5]  |  Trackback
 Tuesday, February 05, 2008

Here's the second installment of our Vim Screencast.

Also, if you're new to our blog, be sure to subscribe to get new screencasts as I release them!

And special thanks to Roy Osherove for Keyboard Jedi.


You can get Vim here, but I'd ultimately recommend ViEmu, a Vim emulator plugin for Visual Studio.

by Aaron on Tuesday, February 05, 2008 5:01:09 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [4]  |  Trackback
 Friday, January 18, 2008

So here it is, finally. In this screencast I cover the very basics of Vim. It's about 9 minutes long. Obviously this is my first one, so please give me feedback so I can make the next ones better.

Also, if you're new to our blog, be sure to subscribe to get new screencasts as I release them!

And special thanks to Roy Osherove for Keyboard Jedi.


You can get Vim here, but I'd ultimately recommend ViEmu, a Vim emulator plugin for Visual Studio.

by Aaron on Friday, January 18, 2008 12:36:13 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [7]  |  Trackback