BDD, the A-Ha Moment

I’ve been writing software professionally for almost six years now, aware that testing is something that ought to be done as part of one’s job, but it was only in the past couple of weeks that I had a solid moment bringing home to me why Behaviour Driven Development really works.

After a sharp awakening that I just wasn’t doing my job properly (I had checked in some code that only sort-of worked, and had no automated tests verifying to what extent it was broken), I was picking up a new task.  This piece of work was something I had done before.  Not something like it – this exact thing, in a quick-hack demonstration model that we threw away, with this being one of the changes we decided to keep.  I knew what to do – I had to delete a bunch of code and it would still work.  I was removing a decision from the user which was no longer necessary with the improved workflow of the new version.

Partly I wanted to improve my work ethic to what constitutes basic professionalism in my team, but mainly I was a little angry with myself and exasperated with the person who had shown me, in stark reality, how I was falling short of my job description.  He is the Tech Lead on the project, and I decided to challenge him – I didn’t know how I could possibly test for the lack of a user interaction without some horribly farcical test.  So I asked his advice on how one might go about TDD’ing a removal of code.  To his credit, after a couple of minutes’ thought, he came back to me and said that the missing user interaction wasn’t something we should be testing, but the decision was still being made – in code.  He and I both knew that our previous version had been a race to get something out the door and we had fallen into some of the classic traps, dumping logic directly in with the UI code in the name of speed.  UI code is notorious for wanting to do extra things, like starting an event loop, which is not conducive to automated testing.  We therefore had no automated tests around this particular logic.

The Tech Lead said that the best thing to do was to wrap some tests around the logic that would remain; we were removing one of the four possible outcomes and the other three still needed to work.  Then, while we were looking at the code to see how to get started, it quickly became obvious that without the decision dialog in the way, we were free to move this logic to a more appropriate layer.  But where exactly?

I started writing some tests to see that this decision would be made and found it much more difficult than it should have been.  That made me revisit the decision of where I thought it should end up, and I moved my testing to a different layer.  This too was a dead end.  I think it was only three attempts at fitting it in at the right level, but it might have been more.  Still, these were two-minute forays at most.  Each test setup showed very quickly the problems at each layer, and they were all completely revertible because I hadn’t changed any real code yet!  Then I found it.  The layer at which it all fitted.  Writing the tests became easy and I could do it piecemeal as I slowly removed unwanted code and then moved the method to its rightful position.

I already knew the code worked – we had manually tested it quite thoroughly – so actually asserting correctness wasn’t even at the top of my list of objectives for writing a test.  Although I wasn’t conscious of it at the time, I was writing the test purely so it would guide my design.  This is why the title of the post says BDD, but I referred to TDD earlier.  BDD aims to remove the conscious thought of testing for correctness and move towards asserting that the functionality exists.  In short, it’s a good way of doing TDD properly.

In the end, I spent around an hour in a single session doing something quite straightforward, which would have taken a similar amount of time had I not let the testing guide my design.  Except that I left the code in a better state architecturally, and I didn’t have to revisit it later because I’d missed something with my human-based testing.

I mention this as my a-ha moment because it really solidified for me what BDD is all about.  It’s not about having those green ticks appear on your screen; it’s about really becoming the first client of the service you’re writing.  The practice run before unleashing it on real code.  And subsequently I have found that the mental battle to test-drive my design (was that pun also intended by the creators of TDD?) has completely melted away.

 

How Agile Failed Us

Well, actually, Agile wasn’t to blame – that’s just a headline tuned for internet searches.  We broke it, and I’ll attempt to show how:

My team is trying something different.  For a few years now, we/they have been doing Scrum.  As with all successful agile implementations, it’s been tweaked to fit our workflow, and calling it Scrum now is more a habit than a true description.  And it is working for us …mostly.  We’ve just embarked on something new, and I thought I’d give a bit of a description of where we’ve come from and where we’re going.  Hopefully it’ll resonate with anyone reading this who isn’t on my team – especially the bits that I now see as obviously brain-dead!


The Serial Killer Tooltip

Firstly, having seen in my statistics a search term used to find my last post, I feel I should point out how I came upon the name for my Serial Port Terminal Application.  I was poking fun at it being a “Killer App” (because really, if there was any software which proved Serial Comms as a technology, it was written years ago).  Combine that with Serial and you have a terrible pun, which seems to be one of the most important things in Free and Open Source Software.  This is not a post about the other type of serial killer.  Or indeed, a full-time one.

[Image: tooltip_smudge]

For such a small feature of my program, the tooltip which shows the timestamp of when data was sent or received has caused me more consternation than probably the rest of the program put together.  I had to touch each of the three layers of my model-view-controller, with the smallest modifications going to the controller, which is odd – in general, the bulk of an app’s functionality should live in the controller.  So here’s how I did it…

After thinking about the UI briefly, and taking my existing GTK+ knowledge into account, I decided it was going to be best to start with the model.  When I first started writing this app, I had imagined a feature like this and broke the YAGNI rule: I had already included a timestamp in the records I was collecting about the data to-ing and fro-ing.  My previous code was exhibiting a massive performance problem with the linked-list I was using, in that it was grinding to a halt whenever I clicked the switch-to-hex-mode button.  I therefore decided that if I wanted to search this thing with decent performance, I would need to step my storage up a notch.  For this, I chose to use an in-memory sqlite database.  At work I’m already getting into a SQL head space, I’ve worked with sqlite before, and of course, its license terms make it easy to include in my liberally-licensed program.

Create a table, drop in the data as it arrives and pull it out in the most straightforward manner imaginable.  I was worried that it would be a performance bottleneck, but I wanted to let the database prove itself otherwise.  Turns out it was good enough.  No reason to get fancy until the simplest thing isn’t possible.
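
The gist of it, boiled down to a sketch rather than my exact code (the column names other than time, bufferIndexStart and bufferIndexEnd are illustrative, and isSent / data are stand-ins for whatever the serial layer hands up):

// Assumes: using Mono.Data.Sqlite;
// An in-memory database lives exactly as long as this connection does.
_dbConnection = new SqliteConnection("Data Source=:memory:;Version=3");
_dbConnection.Open();

using (SqliteCommand cmd = _dbConnection.CreateCommand())
{
   cmd.CommandText = "CREATE TABLE SerialData (time INTEGER, sent INTEGER, " +
                     "data BLOB, bufferIndexStart INTEGER, bufferIndexEnd INTEGER)";
   cmd.ExecuteNonQuery();
}

// As each chunk arrives, stamp it with the current time and drop it in.
using (SqliteCommand cmd = _dbConnection.CreateCommand())
{
   cmd.CommandText = "INSERT INTO SerialData (time, sent, data) VALUES (@time, @sent, @data)";
   cmd.Parameters.AddWithValue("@time", DateTime.UtcNow.ToBinary());
   cmd.Parameters.AddWithValue("@sent", isSent ? 1 : 0);
   cmd.Parameters.AddWithValue("@data", data);
   cmd.ExecuteNonQuery();
}

Storing the timestamp via DateTime.UtcNow.ToBinary() is what lets the query at the bottom of the post rebuild it with DateTime.FromBinary(…).ToLocalTime(), and the two bufferIndex columns are explained a little further down.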

Now for the interesting bit.  How do I pop up an arbitrary tooltip on the same widget, based on the text underneath the mouse?  This took a bit of doing.

First things first, we need to know the position of the mouse.  I attached to the MotionNotify event on the TextView, so now every time the mouse moves I store the X,Y coordinates for later.  As a proof-of-concept, I set the tooltip text during the MotionNotify event and voilà, I had an arbitrary tooltip based on the mouse cursor position.  It was starting to be shaped like the actual feature.  I didn’t really want to store the coordinates myself, and I was kinda hoping that I would find an event with the mouse X and Y coordinates attached, but I’m not that lucky.
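
Attaching to MotionNotify and stashing the coordinates is only a few lines anyway.  As a sketch (mouseX and mouseY are the fields the later snippets read from):

// Make sure the TextView emits motion events at all, then remember
// where the pointer is so the tooltip handler can look it up later.
_textView.AddEvents((int) Gdk.EventMask.PointerMotionMask);
_textView.MotionNotifyEvent += (o, args) =>
{
   mouseX = args.Event.X;
   mouseY = args.Event.Y;
};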

After several mis-fires and some failed attempts at googling, I eventually came upon the QueryTooltip event.  I think I tried this one pretty early on, but it turns out you need to set “HasTooltip = true” on the widget for which you’re hoping it will be called, and you need to set args.RetVal = true or nothing will show up.  Tricksy Hobbitses.

_textView.HasTooltip = true;
_textView.QueryTooltip += _textView_QueryTooltip;
...
void _textView_QueryTooltip(object o, QueryTooltipArgs args)
{
   // Proof of concept: show the stored mouse position in the tooltip.
   args.Tooltip.Text = string.Format("X:{0},Y:{1}", mouseX, mouseY);
   args.RetVal = true;   // without this, the tooltip never shows up
}

The next problem is knowing exactly what piece of text is underneath the mouse cursor.  We know the window co-ordinates, but that doesn’t help on its own.  For a second there, I thought I was going to have to do some stupid math based on the font size and the buffer lines, which would almost certainly turn out to be wrong or not accurate enough in all but the simplest of cases.  Luckily, it’s a common enough thing for GTK+ to have support for it built in:

int x, y;
_textView.WindowToBufferCoords(TextWindowType.Text, (int) mouseX, (int) mouseY, out x, out y);
var bufferIndex = _textView.GetIterAtLocation(x, y).Offset;

So that gives us the buffer index – that’s a solid start!  But it only tells us which character, not the timestamp associated with it.  So it’s time to revisit the database.  Every time we add data, we now need to know whereabouts in the buffer it sits.  The simplest thing possible is to keep a running total of the buffer position and add the length of the data whenever we dump data into the database.
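
The bookkeeping itself is only a couple of lines – a sketch again, with _bufferLength standing in for wherever that running total actually lives:

// Work out which slice of the text buffer this chunk will occupy,
// then advance the running total ready for the next chunk.
long bufferIndexStart = _bufferLength;
long bufferIndexEnd = _bufferLength + data.Length - 1;
_bufferLength += data.Length;
// These two values get written into the INSERT alongside the timestamp.

With a start and end recorded per chunk, finding the chunk under the mouse becomes a simple range check on the buffer index, which is exactly what the query further down does.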

It is now time to explain what I meant by the comment “My previous code was exhibiting a massive performance problem”.  When I started attempting to pull the timestamps out, I was either getting results that were wildly wrong or exceptions were being thrown for no clear reason.  The database decision really came into its own at that point, as I was able to just save the whole thing to a file and dissect it.  As soon as I opened it up and did a SELECT *, it became clear to me that I was storing waaay too much data.  I was storing the whole 256 byte buffer, regardless of how much data was actually sent, and I was storing the length of the buffer, rather than passing up the number of bytes I actually received.  Whoops.  Turns out it wasn’t a performance problem or a hex conversion problem or any of the other things I had considered.  I was just doing something dumb.  A quick fix and my performance problem disappeared, along with the conversion issues.  I was quite excited about that!  To the point where I re-instated the sent / received tag and the performance was still acceptable.  Great Success!!
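
For the record, the shape of the bug was roughly this – a sketch, with _serialPort and StoreData standing in for the real names:

byte[] buffer = new byte[256];
int bytesRead = _serialPort.Read(buffer, 0, buffer.Length);

// What I had: push the whole fixed-size buffer (and its fixed length) up to the model.
// StoreData(buffer, buffer.Length);

// The fix: only pass along the bytes that actually arrived.
byte[] received = new byte[bytesRead];
Array.Copy(buffer, received, bytesRead);
StoreData(received, bytesRead);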

So to round the feature out, it came down to a completely straightforward SQL query, and the performance appears good enough.

DateTime result = DateTime.MinValue;
using (SqliteCommand dbcmd = _dbConnection.CreateCommand())
{
   // Find the row whose slice of the buffer contains this character.
   dbcmd.CommandText = string.Format("SELECT time FROM SerialData WHERE " +
                                     "bufferIndexStart <= '{0}' AND bufferIndexEnd >= '{0}';", bufferIndex);
   using (var reader = dbcmd.ExecuteReader(System.Data.CommandBehavior.SingleResult))
   {
      if (reader.Read())
      {
         result = DateTime.FromBinary(reader.GetInt64(0)).ToLocalTime();
      }
   }
}

In the end, it was actually really obvious, but it took a bunch of google mis-fires to get there.  I wanted to add a scathing remark about the GTK documentation, but when I search there now, I find exactly the information I needed, including the caveats I mentioned above.  I still want to blame them somehow, but I haven’t figured out how I can do that successfully.

The Serial Killer

A little over a year ago, I decided to write my own Serial Terminal.  Most people’s reaction to this is “why?”  Serial comms is a technology well on its way out, and – what they see as the critical hit – a bunch of terminal programs already exist, many of which are even open source.  So why would I bother?


Software is like Cooking

In a previous post I mentioned that software development is a creative process.  In this post, I’m going to expand upon that a little, add some metaphors and later, if I’m feeling wild, I might even add a simile or two!  I’m sure F. Scott Fitzgerald is turning in his grave, given that I knowingly “laughed at my own joke” then.

Coding is like cooking.  It is important that the flavours blend together to look and taste like they are part of a single entity.  It’s not really ideal to find a whole lemon buried inside a cake.  Or a stew.  Sadly, I see this kind of thing all too often in coding.

The recipe calls for a hint of lemon, so Engineers being Engineers think, “if a hint of lemon is a good idea, then think how awesome it would be if we had a whole fruit in there, from which we could squeeze the right amount of lemon juice whenever we wanted.”  At which point there is now a lemon bolted on the side of your mixer, and you find that when you try to bake a different variety of cake, the machinery forces you to have lemon in there.  Or more likely, it is a multi-flavoured juice dispenser, which is so heavy the mixer can barely stand up on its own any more.  And it leaks.

Which doesn’t at all lead me to my next point, so I’ll just jump to it awkwardly.  Something I find myself doing quite a bit is cooking blind.  Which is ironic, because sight is the main sense I actually employ when doing this.  What I mean by that is that I don’t taste my food before serving.  It’s a bit inexact and, I admit, also somewhat hit and miss.  And largely, when it’s a hit, it’s due more to experience of knowing the right colours, textures and combinations of flavours that work.

So I guess what I’m saying with this post is dressed up prettily, but blindingly obvious.  Experience mitigates some of the worst sins, but if watching altogether far too many cooking shows on telly has taught me anything at all, it’s that life is infinitely better if one tastes food before it is served.