Introducing SpecEasy

Last week, TrackAbout pushed SpecEasy, its first open source project, up to GitHub. I thought I’d give a little history and introduction to the project.

For a good chunk of last year Jeff Sternal and I did remote pair programming, and we both wanted to practice strict TDD/BDD to further develop our thoughts on the practice. We’d both done enough to feel the benefits it gave us in forcing us to think about the design of the code. But we wanted a better feel for how, and if, it changed our speed and code quality.

Our biggest challenge was the large amount of friction that came when trying to follow Red/Green/Refactor. First, our builds were slow, so we worked to speed them up. After that, the in-house BDD framework we were using, which was almost identical to SpecsFor, was way too verbose. If you check out the first example on the SpecsFor home page you’ll see that it takes 40 lines to make three assertions, and adding any more would cost at least six additional lines per assertion. This is not the readability-increasing whitespace you’re looking for.

In looking at other possibilities we found NSpec and admired the general approach, but we wanted to avoid using a new test runner. We also didn’t really click with the terminology used, because it felt like it made the tests harder to read, and therefore harder to understand.

We felt that if we could make the test/spec writing much smoother, the whole process of TDD/BDD would be more productive and start to give us the gains we hoped for. So over the course of a few sprints we developed SpecEasy[1] and used it in developing our assigned backlogs with a BDD approach.

Now would be a good time to check out the readmes for SpecsFor, NSpec, and SpecEasy.

Our main goal in writing it was to reduce the friction. We wanted the tests to be both terse and as readable as possible. We also wanted to eliminate or reduce the duplication that our SpecsFor approach led to. NSpec pointed the way to making things more terse, though we did change quite a few minor things (terminology, syntax) to make it read more easily for Jeff and me. We also wanted it to work seamlessly with NUnit, since we already had thousands of NUnit tests in both our website and mobile app solutions. At the time, our code made heavy use of Ninject for resolving dependencies and newing up objects, so we also wanted a solution that (like our previous one) could use a RhinoMocks Ninject mock repository in the test code.

Two things came out of this work that we weren’t really shooting for but that are really nice. Both are side effects of building up tests using strings that describe what’s going on. One of our goals was to reduce how much typing our old SpecsFor-like BDD framework required. In SpecsFor, reusing test setup required either inheritance to build up contexts for tests or just copying the setup code around. If we used inheritance, we were required to repeat aspects of a context in the inheritance syntax, so either way we had duplication of information in the tests. But by building up tests using the simple Given/When/Then of BDD, nested contexts, and strings to explain what was going on, we eliminated a lot of the duplication of test descriptions.
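To make that concrete, here’s a tiny sketch in the same style (a hypothetical Divide example of my own; the Given/When/Then/Verify calls mirror the ones in the screenshot below). The outer Given is shared by both assertions, with no inheritance and no copied setup:

    public void Divide()
    {
        int numerator = 0, denominator = 0, result = 0;

        When("dividing", () => result = SUT.Divide(numerator, denominator));

        Given("a numerator of 10", () => numerator = 10).Verify(() =>
        {
            // Each nested Given refines the shared outer context.
            Given("a denominator of 2", () => denominator = 2).Verify(() =>
                Then("it returns 5", () => result.ShouldEqual(5)));

            Given("a denominator of 5", () => denominator = 5).Verify(() =>
                Then("it returns 2", () => result.ShouldEqual(2)));
        });
    }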

The first aspect of this is that by putting those strings at the front of each line of code, they become a syntax-highlighted description of the test code. Check out this picture of a simple set of tests for an on-screen keyboard:

(Transcribed from the screenshot; each line is cut off at the right edge.)

    public void ProcessRequest()
    {
        KeyboardRequest request = null;
        ServiceResult<string> response = null;
        When("processing the request", () => response = SUT.ProcessRequest(request));
        Given(() => request = new KeyboardRequest()).Verify(() =>
        {
            Given("the request uses required validation", () => request.ValidationType = Valid…
                Given("the user enters an invalid value", () => TypeInvalidValueAndOk(request…
                    Then("it shows the expected message", () => AssertWasCalled<IKeyboardView…
            Given("the request uses confirmation validation", () => request.ValidationType = …
                Given("the user enters an invalid value", () => TypeInvalidValueAndOk(request…
                {
                    Given("the validation message includes a string format token", () => requ…
                        ThenItDisplaysTheExpectedConfirmationScreen(Resources.WarningCaption, …
                    ThenItDisplaysTheExpectedConfirmationScreen(Resources.WarningCaption, req…
                    Then("it does not show an error message", () => AssertWasNotCalled<IKeybo…
                    Given("the user accepts the invalid value", () => Get<IApplicationControl…
                    {
                        Then("it returns a success result", () => response.Status.ShouldEqual…
                        Then("it returns the invalid value", () => response.Data.ShouldEqual(…
                    ));
                    Given("the user does not accept the invalid value", () => Get<IApplicatio…
                        Then("it returns a cancel result", () => response.Status.ShouldEqual(…
                    );
                ));

By reading through the red strings, you can get a pretty good feel for what the test is checking, without trying to decipher the actual code performing the test. Because the tests are self-documenting, it’s easy to read the code and add new tests, which is a common thing to do in large codebases.

The second benefit is closely related, but comes in the authoring stage. When writing each statement of a test, we’d start by writing the description of what it would do. This let us think about the test in everyday language. But then we’d immediately translate that description into a closure that actually did the work. The flow that came from doing that worked surprisingly well. And while SpecsFor can be used in a similar way, the additional work its more verbose syntax requires killed the flow we sought, which is so important for effective BDD/TDD.

Once we got things to a point that we really liked using it, we shared it with the TrackAbout team, and others on the team have since made further contributions, including its current name and a sweet logo (thanks, Mike!). We released it on GitHub last week as the first open source project TrackAbout has developed. In addition to sharing it with the world, part of the reason we’re making it public is because there are further changes we’d like to see happen. SpecEasy is far from perfect.

The use of closures to capture variables that are then used while the tests run leads to some weird quirks. You need to declare variables in the test method itself (not within the anonymous delegates that get assembled into actual tests), but you need to initialize them in those anonymous delegates, so they get reset when running through different assertions. I’m not really sure how we can improve this aspect of the experience, but it’s something I’d like to spend more time thinking about, and to get ideas from the community on.
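Here’s the quirk in miniature (another hypothetical spec; Order and its members are made-up names):

    public void CalculateTotal()
    {
        const decimal ItemPrice = 5m;

        // Declared in the method body, so every closure below captures them...
        Order order = null;
        decimal total = 0;

        When("calculating the total", () => total = SUT.CalculateTotal(order));

        // ...but initialized inside a delegate, so the value is rebuilt before
        // each assertion runs. Initializing order where it is declared would let
        // one assertion's mutations leak into the next.
        Given("an order with two items", () => order = new Order { ItemCount = 2, UnitPrice = ItemPrice })
            .Verify(() =>
                Then("it charges for both items", () => total.ShouldEqual(2 * ItemPrice)));
    }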

Because we use nested Given calls to build up different (but related) contexts and assert different things in each, it can occasionally be hard to get the parentheses and brackets nested correctly. Mike Mertsock recently made a great change to reduce the nesting in one way, but I believe there is more we could do to clean up the syntax, further reducing the friction of writing tests.

There are lots of smaller issues as well. Right now, SpecEasy uses NUnit, Ninject, and RhinoMocks. It would be nice to make those tools optional, so that you can use your favorite test runner and mocking framework. We should publish a public NuGet package. We could use an NUnit extension to turn each assertion into a separate test. It would be nice to have an easier way to specify multiple Given clauses. And the list goes on.

Anyway, if TDD is your thing, try it out. And if you want to help us solve the remaining issues, just fork the code.

Notes

  1. SpecEasy was originally called NuSpec internally

TrackAbout Tweaks: Windows CE Emulator Images

So, in my attempts to get rid of the gaps in the DataGrids, QA discovered that when TrackAbout was installed on a Windows CE device (as opposed to Windows Mobile), the behavior was slightly different. First of all, with my initial changes, horizontal scrollbars always showed up. And when we went back and looked at older versions of TrackAbout, they often appeared there too.

The big challenge in getting these issues fixed was that I did not have an actual Windows CE device. Actually, I don’t have any device, and do all of my development and debugging on either the device emulator or our simulator. Our simulator is a regular Windows program that displays the same screens you see on the device, but runs much faster. It’s a better way to quickly test things during development, even though its fidelity to the actual end-user experience is sometimes low. When we’re ready to hand off our code to QA, we make sure to test it in the device emulator, since that’s a much higher-fidelity testing experience.

Unfortunately, in addition to not having a Windows CE device, I did not have a Windows CE device emulator image. I asked around among the devs on the team, and it turned out no one had one. We all used Windows Mobile device images, which are pretty easy to get online. Some said it was impossible to get a Windows CE image.

But if I couldn’t get a Windows CE image, I had only two options: have a new device bought so I could more easily test and figure out what was going on, or settle into a slow cycle of making a change (through educated guesswork), handing it off to QA, having them show me the results via screen sharing, and then making a new educated-guesswork change. Not fun. Not really even worth trying. So instead, I decided to do the "impossible" and find out how I could get a Windows CE device image.

And what I found out is that Microsoft does not make it easy. Neither do the device manufacturers. I figured I could find a download on the support page of one of the device manufacturers that would let me do the testing I wanted, but after much fruitless searching I gave up on that. I suppose I could have pushed harder by calling their support departments, but I was doing this in my free time. I also figured Microsoft would have a pack of different Windows CE emulator images that I could download. But no. They only make images for Windows Mobile/Pocket PC available. It turns out that if you want a Windows CE emulator image, you have to make it yourself.

So here are the rough steps that I followed:

    1. Make sure you have Visual Studio 2005
    2. Download and install Windows CE
    3. Create a Windows CE project?
    4. Run Platform Builder from within Visual Studio 2005
    5. Choose an emulator image
    6. Choose a handheld device
    7. Add the features you need in your image
    8. Build the image
    9. Find where it got built
    10. Create a Device Emulator config file that points to the image
    11. Run it in the emulator
    12. Turn on DMA connections

    I had to download the Windows CE trial edition so that I could create a Windows CE runtime. These runtimes are typically used to build images for actual hardware that manufacturers are testing before they release a new product. Windows CE ships with "Platform Builder", a plugin to Visual Studio 2005 (yes, 2005!) that makes it possible to create a Windows CE "platform" with certain features turned on or off. Once you have a platform defined, you can build an image. It took quite a few trial runs to get an image built in a way that it could actually be used in the Device Emulator. Once there, it was very much a barebones install of Windows CE, and I got to track down odd tips like how to turn on DMA sync so ActiveSync would work, and how to get the .NET Compact Framework installed. But in the end, I had a working Windows CE emulator image running our software.

    What is nice is that, once I had the emulator, it was simple work to figure out what the right metrics were to make the DataGrid gaps disappear. Since then, it also helped me track down another Windows CE-specific bug that cropped up. 

    TrackAbout Tweaks: Building Faster

    One of the challenges we face at TrackAbout is the speed of our builds. TrackAbout Mobile, our Compact Framework app that runs on rugged Windows CE devices, would take between one and two minutes to do an incremental build. If you add on the time to run unit tests, or to deploy to an emulator and start testing manually, the development cycle really starts to slow down. Since Jeff and I have been experimenting with doing "TDD like you mean it", this was really a pain.

    As it turns out, there is an easy way to speed up builds that use the Compact Framework. One of the "features" that Microsoft provides for Compact Framework development is a platform verification check, which verifies that all of the .NET features your code uses are available on the platform it targets. That way you don’t deploy code to a device only to have it explode when it tries to access a property that exists in the full .NET Framework but not in the Compact Framework.

    And this little step takes the majority of our build time. Although it’s definitely something we want to have run, it’s not something we need to have run on each build of our app. So to speed up our builds we made it a conditional step that developers can turn off in their debug builds. First, we checked in a change to our project files that sets a new property (SkipPlatformVerification) to true in debug builds. Then, when an individual developer gets sick and tired of slow builds, they can change a single line in their MSBuild targets file to make the PlatformVerificationTask conditional on that property.
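    The project-file half of that change looks something like this (a sketch; the conditions in our real project files are a bit more involved):

        <PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
          <SkipPlatformVerification>true</SkipPlatformVerification>
        </PropertyGroup>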

    To enable faster builds, they just need to modify one line in the C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.CompactFramework.Common.targets file.

    The line:

    Name="PlatformVerificationTask">

    Should become

    Name="PlatformVerificationTask" Condition="'$(SkipPlatformVerification)' != 'true'" >

    By making the suppression of that task depend on action by individual developers, all our official builds (from our Jenkins server) still run it. By making it only happen in debug builds, it’s still easy for developers to make sure their code will run on the Compact Framework. And by making it possible to skip the step in daily development, we sped up our normal build times from 1-2 minutes down to 5-25 seconds.

    The other day I was on the phone with another developer as he made the change to his targets file and rebuilt. He kicked off the build, leaned back in his chair, then freaked out because his build was already done. 

    It made my day.

    Gaps in the Grid: Scaling DataGrid Columns to Fit on the Compact Framework

    I’m going to write about some little (and big) tweaks that I have tackled, either in my free time or in the course of solving customer problems here at TrackAbout, to improve our code and our development experience. These will range all over the map, and you’ll see that I don’t know all that much, but I love to learn enough to fix things up a bit.

    Since starting at TrackAbout much of my time has been spent on the .NET version of our TrackAbout mobile app. One thing that quickly started to annoy me was small gaps on the right side of the DataGrids. Many of the different workflows in this app include grids of data, often listing assets or other information. But every one of these screens had a gap on the right side. It made the screen look bad, it wasted valuable pixels on these small screens, and it offended my (admittedly limited) design sensibilities.

    Let’s Play “Find the Gap”

    Find the Gap!

    As I worked with a few of these screens I quickly learned that we had a common algorithm for making sure that, whenever the screen size changed, the columns would resize to fit the width of the grid. Although the algorithm for column resizing was common to all the DataGrids, the code was not. It was re-implemented for every screen that had a DataGrid, slightly changed depending on the number of columns or the desired relative widths of the columns. It had obviously been written up quickly for the first DataGrid one afternoon, forgotten, and then copied around whenever someone made a new screen with a DataGrid on it, with minor tweaks as necessary to make it work with that screen’s specific set of columns. So each of the implementations was slightly different.

    Most importantly, none of them eliminated the gap. Every single DataGrid had an annoying gap on the right side because the algorithm was not correct. Although over time the different implementations had diverged somewhat, not a single one did the right thing.

    What was the code trying to do?

    The gap was actually hard-coded, an attempt to make sure horizontal scrollbars didn’t show up. If the algorithm ever made the combined width of the columns greater than the width of the DataGrid, a horizontal scrollbar would appear, so there was a magic number right in the middle of it all to force the columns to use less space. But because the algorithm didn’t use all the right window metrics or properly handle the different platforms the code ran on, the only way to get the magic number to work was to make it too big. And so in 99% of the cases, there was a 1-12 pixel gap on the right of the grid.

    A simple algorithm looked something like this (a sketch of its shape; the names and the exact margin value are approximate):
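        // Magic number meant to cover borders, dividers, and high-DPI growth.
        private const int LAST_COL_MARGIN = 12;

        private void SizeColumns(DataGrid grid, DataGridTableStyle style)
        {
            // Give the single column all the width we think is safe to use.
            style.GridColumnStyles[0].Width = grid.Width - LAST_COL_MARGIN;

            // Explicitly hide the horizontal scrollbar, just in case the
            // width math above still overflowed.
            foreach (Control child in grid.Controls)
            {
                if (child is HScrollBar)
                    child.Visible = false;
            }
        }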

    That LAST_COL_MARGIN was trying to cover a few things: the widths of the borders, the widths of the column dividers, and how those widths grow in high-DPI mode (which happens on Windows Mobile, but not on Windows CE). Additionally, hiding the horizontal scrollbar happened automatically if you got the columns within the allowed width, so the explicit scrollbar-hiding code at the end was just an admission that the rest of the algorithm was wrong.

    Some implementations with two columns replaced the width calculation with something like the following (notice the unnamed magic number 6):
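        const int FIRST_COL_WIDTH = 100;  // hypothetical fixed width for column one

        style.GridColumnStyles[0].Width = FIRST_COL_WIDTH;
        style.GridColumnStyles[1].Width =
            grid.Width - FIRST_COL_WIDTH - LAST_COL_MARGIN - 6;  // why 6? nobody knew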

    And for an arbitrary number of columns, the code looked roughly like this (see if you can spot the bug in this version that adds to the gap):
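        // Split the available width evenly across however many columns there are.
        int columnWidth = (grid.Width - LAST_COL_MARGIN) / style.GridColumnStyles.Count;
        foreach (DataGridColumnStyle column in style.GridColumnStyles)
            column.Width = columnWidth;

    (Spoiler: integer division truncates, so up to one pixel per column is silently dropped, and every dropped pixel widens the gap.)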

    I’ll spare you all the other 10 or 15 different implementations.

    How do we fix it?

    Rationalizing all this took some time. The first step was fairly simple - I created a canonical place for the column sizing algorithm. We already used a custom subclass of the DataGrid, so that was the obvious place. Next, I made it work with the simplest possible instance of the algorithm (single column). That was easy to extend to multiple, equally weighted columns. As these were added, I removed the old code and called into the new algorithm, testing as I went. I also had to add handlers for weighting columns differently and for setting fixed-width columns.

    The real challenge along the way was getting this new code right. That meant I had to figure out what metrics mattered (DataGrid borders, DataGrid column dividers, and platform DPI setting) and figure out what their values were across the three platforms we use (Windows Mobile, Windows CE, and Win32). Initially, I did not know that Windows CE and Windows Mobile had different window metrics, and only realized it when QA found the issues. Figuring out those metrics on Windows CE devices was the trickiest of all, which I’ll document in another blog post. There was one last quirk around resizing the columns - doing so could change the visibility of both horizontal and vertical scrollbars, which would then require a second run of the resizing code.

    So here, in spirit, is the final code, which allowed for both weighted columns and fixed-width columns (this is a sketch of the shipped version; the metric constants are placeholders, since the real values vary by platform and DPI):
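        using System.Collections.Generic;
        using System.Windows.Forms;

        public class ResizableDataGrid : DataGrid
        {
            private class ColumnSpec
            {
                public DataGridColumnStyle Style;
                public float Weight;    // relative width; 0 means fixed
                public int FixedWidth;  // used only when Weight == 0
            }

            private readonly List<ColumnSpec> columns = new List<ColumnSpec>();

            // Placeholder metrics: the real code measured these per platform
            // (Windows Mobile, Windows CE, Win32) and per DPI.
            private const int BorderWidth = 1;
            private const int DividerWidth = 1;
            private const int VScrollBarWidth = 13;

            public void AddWeightedColumn(DataGridColumnStyle style, float weight)
            {
                columns.Add(new ColumnSpec { Style = style, Weight = weight });
            }

            public void AddFixedColumn(DataGridColumnStyle style, int width)
            {
                columns.Add(new ColumnSpec { Style = style, FixedWidth = width });
            }

            public void ResizeColumnsToFit(bool verticalScrollBarVisible)
            {
                // What the columns actually have to share: the client area minus
                // borders, dividers, and (when visible) the vertical scrollbar.
                int available = ClientSize.Width
                                - 2 * BorderWidth
                                - DividerWidth * (columns.Count - 1)
                                - (verticalScrollBarVisible ? VScrollBarWidth : 0);

                int fixedTotal = 0;
                float weightTotal = 0;
                foreach (ColumnSpec c in columns)
                {
                    if (c.Weight == 0) fixedTotal += c.FixedWidth;
                    else weightTotal += c.Weight;
                }

                int remaining = available - fixedTotal;
                int used = 0;
                ColumnSpec lastWeighted = null;
                foreach (ColumnSpec c in columns)
                {
                    if (c.Weight == 0) { c.Style.Width = c.FixedWidth; continue; }
                    c.Style.Width = (int)(remaining * (c.Weight / weightTotal));
                    used += c.Style.Width;
                    lastWeighted = c;
                }

                // Hand the pixels lost to integer truncation to the last weighted
                // column, so the columns exactly fill the grid and leave no gap.
                if (lastWeighted != null)
                    lastWeighted.Style.Width += remaining - used;
            }
        }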

    One thing I cannot show you easily is that this block of code replaced hundreds of lines of duplicated code across tens of files. I can assure you that it was a satisfying check-in to make.

    Finished!

    Magically Gone!

    When it was all done, both the code and the UI were much more beautiful. Users would have a little more screen real estate devoted to actual information, which is important on small mobile screens. And because every single DataGrid used the new code, lazy coders—like me—who copied and pasted to create new screens with DataGrids would pick it up automatically.

    Why I Work At TrackAbout

    A few days ago, my manager at TrackAbout asked me some questions about my career to find out why I work there, how likely I am to leave, and what I would change if I could. For managers that want to keep their employees (and keep them happy), it was a good exercise. Good employees will be getting occasional offers from recruiters and other companies to consider other jobs, so they’ll already be thinking about similar questions. 

    "Why do I work at TrackAbout?" "What would tempt me to leave?" "How can I easily, and quickly, tell off recruiters that obviously have no idea what’s important to me?"

    This is an attempt to answer those questions for myself and for others. If you are trying to get me to leave TrackAbout, consider these thoughts a challenge: How can you beat my current deal? (I think you’ll find it hard to do so). If you are Larry, my manager, and want to keep me happy (Hi Larry!), don’t worry too much. Worrying a little is fine, though. That’s what helps us keep improving.

    I work at TrackAbout because I want to work from home, I like to work at small companies, I love to tackle interesting challenges that involve both product and technical decisions, and it’s nice to be working in the world of real things. 

    The product itself is not something I’m so passionate about that I specifically looked for companies making it. And though I’m coming to appreciate the value it provides to our customers, I’ll never care about asset tracking at the same level as the founders. That’s fine. When it’s my turn to found a company, it will need to be in an area I’m so passionate about that I can’t stop thinking about it. Until then, I’m happy to tackle the interesting problems that TrackAbout presents. 

    What I did specifically look for in coming to TrackAbout was a successful company with a culture of remote work that had interesting challenges to solve. And I’ve learned over the years that there will always be interesting technical challenges to keep me engaged. Because that’s what really makes me happy: solving interesting problems, whether they are big or small. For some reason I seem to be drawn to problems with existing solutions. I’m more of a cleanup guy than one who has solutions spring fully formed from his head. I love to make code better, and tend to be more successful doing that than writing new code in the first place. It’s not that I don’t write new code; it’s that I work best writing it in the service of fixing problems with existing solutions.

    I’ve only been at TrackAbout for a few short months, but I’ve already found tons of ways, large and small, that I’d like to change the code or the product. Some of them I’ve tackled, like making our mobile simulator more closely match what shows up on the device, so it’s easier to test stuff. Or making it possible to debug the unit tests in our mobile solution (that was a really easy code change; the hard part was figuring out why they weren’t debuggable in the first place). Or moving our code from Subversion into Mercurial. Heck, if I could focus exclusively on our technical debt for the next year, I’d be in heaven.

    The key is that in a product as large and as old as TrackAbout, there are enough problems and challenges to keep me happy and productive for years. Some of those come from being a product used in the real world, rather than the virtual world that information workers live in. Our code has to deal with all kinds of edge cases. In the world of asset tracking things get lost, stolen, broken, etc. Barcodes on assets get dirty and cannot be scanned, legacy systems need to be integrated, and paper is still used for too many things. Finding good ways to write modular, reusable code when it must also handle the messiness of real life is a fun challenge. And it’s one that provides real value to our customers.

    Beyond finding satisfaction in the work itself, TrackAbout also has a great environment that lets me and my family find happiness outside of work as well. We now live back in the west, closer to family. We are able to get back on track with retirement savings because we’re not living on the crowded, expensive east coast. And without a commute, it’s much easier for me to find the time to exercise, to teach my kids math, and to help out at church and in Cub Scouts, while still doing more at work than I’ve been able to do in past jobs.

    Yes, barring drastic changes in my life, that means it’s highly unlikely I’ll move to Silicon Valley, or Seattle, or New York, or Boston for a job. If you, friendly recruiter, want me to, you’ll have a very high bar to jump over. So if you’re willing to make that leap, or if working remotely is supported and encouraged at your company, I might be willing to talk. 

    If not, there are some interesting problems to solve that I need to get back to. 

    Tweaking Teaching Math

    So my wife started homeschooling our older two boys in September after we’d recovered from the summer vacations a little bit. She likes to write about their escapades on her blog, and they’re doing a great job. In deciding to start homeschooling we had planned to have me help by teaching them math, but then decided before our “first day of school” that I was too busy with work, a long commute and Cub Scouts to do that. So for the last two months Kami has been teaching them using a combination of the Saxon Math program and the Khan Academy, and I haven’t been all that involved, other than as a spectator on the sidelines and someone that Kami can bounce her ideas and concerns off of.

    She’s not learning math … yet.

    Since then, she’s been constantly tweaking things as she goes, honing how she teaches them math. First, she went back and forth between teaching Reagan and Cade together or separately. Though they can both handle the third grade material fine, she found that trying to teach them together just led to goofing off. They devolved into performing for each other, rather than thinking about and learning the material. So she settled on teaching them separately, though that did take more time. Each day she would spend about 45 minutes with each boy going through the Saxon Math lesson for the day and the daily "meeting", and helping with worksheets if needed. While one boy was doing this with Mom, the other did some personal worksheets and then got to work on Khan Academy exercises, usually doing whichever ones he wanted to.

    While this generally worked ok, there were two challenges that regularly came up.

    First, the amount of repetition in the Saxon Math curriculum was sometimes boring, both for her and for the kids. Though repetition is good, once a kid is comfortable with the material it can be too easy to be fun. Also, the boys like to brag to Dad at dinner about what they learned each day, and there would be weeks where I’d only hear about math once. And my wife wants you to know that it wasn’t because they weren’t doing it!

    Second, the hour and a half of math time was a big chunk of the homeschooling day, and it was regularly interrupted by the younger kids, our three-and-a-half-year-old Levi and newly walking Liberty. This wasn’t as bad with other subjects, where it was usually easier to recover from interruptions, or they didn’t take as long, or they could be done with both boys and sometimes the younger ones too.

    Both of these challenges point to deeper issues, which we discussed occasionally over the last two months. Many of you might have responded to the first concern by saying “Throw out the Saxon Math!” or “Just skip the parts that were repetitive!” Both responses discount the value of repetition in learning. Even if they didn’t, you’d have to consider that Saxon math makes it fairly easy for someone who never felt comfortable in the world of math (my wife) to teach her kids. But skipping parts of the curriculum brought back Kami’s discomfort of trying to figure out what was important to repeat and what wasn’t, without feeling like she had a good idea of what they would need later on.

    The second challenge doesn’t provoke such simplistic responses. You wouldn’t say “Well, you just got to get rid of those two younger kids,” or “Math isn’t that important, just spend less time on it.” Actually, we could spend less time on math, but only if we felt there were another way to teach them that would be as effective. But the fact is that we’ve got four kids. Life has been getting easier with four over time, as the older kids become more capable of helping and the younger ones get better at playing on their own, but that’s also not an issue that is going away in the short term.

    So rather than try to eliminate those challenges, we’re going to sidestep them using an ingenious plan: I’ll teach them math. Ok, so it’s not ingenious. It’s not novel, or newfangled, or noteworthy. But it directly addresses the issues. I’m much more comfortable experimenting with the curriculum than Kami was. I’ll start simply, just doing what Kami was doing, but plan to change things up based on the needs and abilities of the boys. Besides that, I’m curious about how best to teach math, not just in a generic sense (what works for everyone), but also in a specific sense. What works for Reagan? What works for Cade? Why? We will work towards a method that gets them learning, gets them involved, and doesn’t take any longer than our current one.

    This also helps with the challenge of homeschooling with toddlers. I’ll be teaching in the evenings, after the younger two go to bed. That means the house is quieter, it helps the boys wind down after their active afternoons, and Kami can deal with the interruptions most days. Another awesome benefit is that she and the kids just got an extra hour and a half each day. Some of that extra time can still be spent by the boys doing Khan Academy exercises and math worksheets. But the rest can be used to learn about other stuff, play, relax, run errands, do housework, etc. Basically, it gives Kami more flexibility during the day, which reduces her stress level, and it gives us more structure in our evenings, which is important when trying to get kids to actually go to bed.

    Compared to this kid, math is easy.

    Of course, it’s not all unicorns and bacon. This new schedule does stretch me a little bit. I’ve sacrificed some of my commute time on the train so I can think about the work of teaching math. I tend to be pretty wiped out once the kids go to bed, so I haven’t been able to get as much done on weeknights. And I do still have my other responsibilities: Cub Scouts, my job, and finding time to play with my younger two kids. Much to Kami’s chagrin, I haven’t done the dishes much this week. And when I have other responsibilities in the evenings, it can mean some schedule juggling. Though Reagan seems to do fine learning math in the evenings, Cade can be a bit more goofy at that time of day. Those challenges won’t just go away, and I’m sure we’ll continue making adjustments to how we balance everyone’s different needs.

    All that said, our first week on the new schedule has gone surprisingly well. Since we started on Monday, I come home to a much happier wife and kids, because they’ve all had more time to do other things. The "perfect homeschooling day" is something rarely seen even among experienced homeschoolers, but Kami’s been really happy with every day this last week. She’s also more relaxed about school, which is good for her and for the kids. The older boys get more one-on-one time with Dad, which they really like, and which makes me feel a little bit better about the long commute.

    There are certainly disadvantages to homeschooling (it can get expensive), but one of the advantages is that we can be way more flexible. I like to think of our little homeschool as a small startup amidst the mega-corporations known as public education and private schools. Our market is significantly smaller – just four kids, two of whom wouldn’t be served by the megacorps anyway – so we can understand their needs and meet them much more directly. And we can respond more nimbly to those needs as they change. This change in how we teach math is really just one of the larger changes amidst a constant stream of tweaks to the way we teach our kids. Some of them work, some of them don’t, but it’s easy to see the successes and failures and respond quickly.

    The Most Awesome Weekend Release Ever

    "Hey, I have to tell you about the most awesome weekend release ever. It happened last Saturday, and –".

    "Weekend Release?", you ask, incredulous. "Aren’t you supposed to be doing weekday deploys?”

    "Yeah, yeah, I know. We agree, and we’ve taken a break recently as we make changes that involve enough risk that we don’t feel comfortable doing them during the week. But that’s not the point of this story. What I really wanted to tell you is about my crazy night trying to –".

    "But everyone agrees that deploying at night doesn’t make sense, right?”

    "Um, yes. Well, not everyone. But we do, and that’s also on our radar. So, we had to update some test accounts to test some new performance improvements this coming week –".

    "Wait, so your test accounts are on your production servers? Don’t you have a staging environment?"

    "AARGH! Will you just let me tell my story?! And, yes, we know we need to build out a staging environment. It is one of our top priorities at the moment.”

    "Ok, ok, sorry. Go ahead with your story."

    "Thank you."

    So last week we got a FogBugz release ready with some bug fixes and performance improvements. The plan was straightforward: deploy it to our production servers on Saturday night and upgrade our test accounts so that QA could bang on it for a week. Barring release blockers, we could upgrade everyone the following Saturday.

    On Saturday, while I was out running errands, it started to snow. I love the snow, and thought, “That’s cool, my boys will enjoy playing in the snow and it should all be melted by Halloween when we go trick-or-treating.” Those tweets and that thought apparently jinxed me. Once I was done with my errands in the early afternoon, I drove home through near-blizzard conditions only to get a text from my wife that the power had gone out. She also mentioned that a bunch of trees were down on our street. As I got off the freeway and drove down the hill towards home, I was following a bus that was swerving all over the road to avoid low-hanging branches and a tree that had fallen across half the road.

    Our releases don’t usually look like this

    After a few hours it became apparent that we wouldn’t be getting power back in time for me to log in to our VPN and take care of the release. So I started contacting friends from church to find someone who had power, wifi, and the willingness to put me up for an hour or so at 10pm. Brandon, Ian, and John all said they could, but Brandon was first, so I let our sysadmins know the release was still on (they like to be alerted to these things, you know). It got dark at about 7, and we put the kids down early, wrapping our one-year-old Liberty up in a snowsuit so she’d be warm enough all night. At 9:30, I packed up my laptop and went out to make the 15-minute drive to Brandon’s home.

    As I left I had two roads I could take: the freeway, which involved some pretty steep hills; or driving through town (three towns, actually) along a road with no significant hills. Because the snow was still coming down hard and sticking pretty well I decided to avoid the hills. That was a mistake. The first few minutes were pretty uneventful, but it was fun to drive through downtown Berkeley Heights and into New Providence in complete darkness. No streetlights, no traffic lights, no homes lit up, no businesses, no grocery stores. Ninety percent of the other vehicles on the road were snow plows (doing a great job, by the way).

    The drive through town was also along a heavily tree-lined road, which made the darkness creepy in a "Halloween is cool" kind of way. But as I travelled it, it became clear that I had taken the wrong road. Trees had fallen across it at pretty regular intervals. The snow plows (or kind citizens) had cleared a single-lane path where these had happened, though it was often just barely wide enough for my compact Honda Civic. I went ahead and drove past a few road-closed signs and one unmanned roadblock.

    Not in New Jersey, but you get the picture

    Soon after that I realized why they were closed: the downed power lines. Most didn’t block the road at all, but one was hanging directly in my path and scraped over my car before I had time to notice it and react. That got my adrenaline running. At another point I got to play chicken with a snow plow that was driving the wrong way in my lane.

    I finally made it into Summit, where there was power in some parts of town, and pulled into Brandon’s driveway, but of course, the car got stuck as soon as I left the road, so I was blocking the exit for both him and his neighbors. He and I puzzled over that for a couple seconds, then realized his neighbors wouldn’t be crazy enough to go driving in the storm. He set me up in his basement, his wife Becki offered me something to drink, and they went back to watching TV.

    After a little over an hour the release was done, and it was time to head back home. This time, I’d learned my lesson. Throughout my drive to Brandon’s the roads had been clear, thanks to the plows, so I knew that it would be an easy ride home on the freeway.

    And it was.

    Until I got off the freeway.

    I took an exit that would avoid the steepest hill, just in case. Turns out that was a good choice, as I found out Sunday that it was completely blocked by fallen trees. It did mean I had a bit further to drive through town, maybe a mile and a half instead of just a mile.

    And that’s when the fun really started.

    The off-ramp had a tree across it that had obviously been chainsawed through in the last couple of hours. Just around a bend, I came across a cop car with lights flashing, blocking the road I would have taken. So I decided to risk a side road to get around the blocked-off section. Now I really was in rural New Jersey, and I got to drive over some branches that weren’t quite big enough to block my Civic. I finally made it back to the road leading home (less than a mile to go!), but just a couple of blocks further I ran into a roadblock I couldn’t drive around. Power lines and police tape were draped across the road right at car level.

    So it was off into side streets again, on the very steep hills I had wanted to avoid. I still wasn’t slipping around on the snow much, but had to take a couple detours anyway (trees across the road). Finally I started heading back down the hill.

    The end was in sight!

    So, of course, I got stuck.

    One more downed tree. At the worst spot. The snow was deep enough that turning around wasn’t an option. Backing up didn’t work. I’d finally gotten my Civic stuck in the snow only about half a mile from home.

    Fortunately, I’d left home with a flashlight (so I could see in the garage). I broke that out, bundled up, and started the hike home. My flashlight was pretty weak compared to the car headlights, though, so I just barely avoided walking into a downed power line on the hike down the hill. Once down the hill, I got to pass a powwow of power crews and cop cars at the last intersection before my home where they guided me past more downed lines. From there on out I walked down the middle of the street in complete darkness illuminated only by my flashlight.

    Once home, after taking over an hour to make the 15 minute drive, I jumped in the shower to warm my toes back up before getting under the heaps of blankets my wife had piled on our bed.

    Follow-up (Monday): The power is still out, and the city is telling us they can’t even give us an estimate for when we’ll get it back. On Sunday we drove for a few hours into southern New Jersey to get the last remaining generator at the only Home Depot that hadn’t sold out within the first hour they were open. We’re now using it to power our fridge, freezer, and DSL modem, so I can work from home, since the trains into New York aren’t running. Kami and the kids are doing homeschool in the dark, which the kids think is just awesome. I’m writing this post in a room that is about 60°F, while listening to the dulcet roar of our generator right outside my window.

    Follow-up (Friday): The power finally came back on Thursday evening at about 5pm. We spent the evening cleaning up our home, doing laundry, and generally trying to get our lives back in order. And guess what? I’ve got another release to do on Saturday!

    What ever happened to all that talk about build and release management?

    Well, I collected them into a series and added a report card over on the Fog Creek blog. I plan to continue adding to the series going forward.

    "Have you been flossing regularly?"

    "Have you been flossing regularly?" - Your Dentist

    Don’t you hate it when you go to the dentist and, while the dentist is digging around in your mouth, he asks if you’ve been flossing?

    "Huh-mhqhg," you respond, hoping that all the fingers in your mouth made your negative answer sound like a "Yes, sir"  on the way out. But no; dentists (and their assistants) have an uncanny ability to understand the most garbled language correctly. Either that, or they’re just really good at reading the guilty look in your eyes.

    And then the kicker comes:

    "Yeah, I could tell by the profuse bleeding that happens when I start jabbing your gums with this sharp metal instrument of torture." Ok, so the dentist doesn’t actually say that, but hey, if he can hear what he wants in my garbled mumblings, I should be able to hear his words the way I want to, right?

    Anyway, let’s assume he’s probably right, and that regular flossing would actually eliminate the need for blood transfusions after each visit to replace all that was lost. From the scattered times in my life when I’ve been able to keep up flossing for a week or two, I know that after the first few days the bleeding does go down significantly. So maybe the dentist has a point, and not just at the end of those instruments of torture.

    But how do you go from a lifetime of flossing for a week or two after each dentist visit and then forgetting about it completely until the next one, to actually making it something that you do each day without thinking much about it?

    The trick is building a habit. No, it’s not easy. Yes, it will take time. There is no easy way to build a good habit, but you can make it easier. If you need help, I recommend following these basic steps: choose a trigger, make it public, keep it simple, build anticipation, take your time, and report to others.

    I went through these steps over the last month or so, and now I’m flossing daily, even under the dental wire that is the last, permanent reminder that I had braces as a kid. Yeah, I was even more nerdy back then.

    Anyway, I’m writing up this post because my next dentist visit isn’t coming for a few months. But if I tell my small corner of the internet how that visit will go, it will help me stick to my habit until then.

    So now, the next time my dentist asks, “Have you been flossing regularly?”, I’ll have a better answer for him:

    "Huh-mhqhg"

    Ok, you might not be able to tell the difference, but somehow the dentist will see the gleam in my eye, and know it is not the glint of guilt he saw at my last visit.