Blog


Welcome to our blog. We hope that these pages provide an insight into us, our products and how we develop them. Please feel free to write to us if you have anything to add to any of the posts here.



Post-conference snowballs

Monday 7th April, 2008


Well, the conference is finally over and we're packing to go home. It feels like the week has flown by - it's hard to believe we've been living out of the same hotel room for 6 days now!

We would like to take this opportunity to extend our thanks to everyone involved in running the ACCU Conference this year, and of course to everyone who took the time to come and see us at our stand. It was great to meet you all, and I'm sure we will see you again this time next year.

I've uploaded our photos from the week to Flickr:

Our photos from this year's conference

Just to add to the fun, it snowed during the night:

It's snowing!

I couldn't resist the temptation to go out and take a few pictures while we were supposed to be packing...

Posted by Anna at 11:20am


The Last Day...

Saturday 5th April, 2008


Seven Deadly Sins of Debugging

(Roger Orr)

Oh well, it's only a bug.

Roger is a member of the ISO C++ Standards Committee, and a specialist in the field of debugging. Having attended one of his sessions last year, we had a pretty good idea that this keynote would be both entertaining and informative.

Roger started by stating the obvious - that the best bugs are those which do not occur, and that by learning to apply techniques to reduce problems up front (e.g. good design, unit testing, code analysis, defensive programming etc.) we can reduce the risk of bugs occurring. None of this should be news to anyone attending the conference.

After using such techniques to remove the obvious bugs, we are of course left with everything else. Debugging is quite obviously here to stay.

It has been stated that better programmers can be 20 (?) times better at finding bugs, spend less time fixing them and put fewer new bugs in by doing so. The obvious question this then raises is "Why is there such a differential, and what prevents so many of us from learning?"

Enter the Seven Deadly Sins of Debugging:

Inattention can lead us to not look closely enough at what we are doing, miss the obvious patterns ("what are the real symptoms of the bug?"), and repeat the same mistakes again and again.

Details are very important in debugging - logfiles, configuration information etc. can all yield crucial information, so the more information which can be automatically generated the better. Collecting this information up-front can also save you from having to generate the information you need while actually investigating the bug.

Debugging also of course requires very focused concentration, so taking adequate breaks is essential. There is nothing less productive than staring at a debugger with a deadlocked or clueless mind - and yet all too often developers attempt to debug in exactly that way.

Keeping checklists (e.g. our own lint configuration triage procedure) can also help greatly, since it is all too easy to miss something obvious when you are under pressure to fix a critical bug. Similarly, the insight afforded by a second pair of eyes can help, so we should never be afraid to ask for help.

The corresponding virtue is of course observation, which leads us to ask interesting questions such as:

Pride can lead to higher quality code in the first place, but when mis-applied it can also unfortunately:

The opposite of pride is humility. Questions such as "I could be wrong", "What have I missed?" and "Who can I ask, and how?" can lead to the insights you need to fix that troublesome bug.

Naivety tends to prevent us from learning from our mistakes, and leads us to make mistaken assumptions about where the problem lies. On the plus side, the simplest fix for a bug is likely to be the right one.

The corresponding virtue is wisdom - e.g. standing back to reflect on how the bug happened, why, and how we can prevent it happening again.

Anger needs no introduction. It can of course cloud our judgement, cause us to miss obvious clues and deny the implications of the evidence we have.

Sloth can lead us to try to avoid "unnecessary" work while we are writing code in the first place. When the resultant bug surfaces, we poke around in the debugger in vain. It also results in ignorance - a failure to read around the subject or fully understand the technology. Sadly, this is all too common.

The corresponding virtue is diligence - by learning enough about the system to understand how it behaves, we dramatically increase our chances of identifying the cause of bugs in a timely manner.

Diligence also leads to other positive effects - for example spending time upfront to save even more time later. By writing scripts, adding logging etc. we can often make a real difference when investigating a bug. Another often overlooked technique is to make error codes unique enough to look up in a search engine ("error 3" really doesn't help anyone).
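To illustrate that last point, here's an entirely made-up example (the error code, message text and function names are mine, not Roger's) contrasting the two approaches:

    #include <cerrno>
    #include <cstring>
    #include <fstream>
    #include <iostream>
    #include <string>

    // Unhelpful: tells neither the user nor the search engine anything.
    void reportFailureBadly()
    {
        std::cerr << "error 3" << std::endl;
    }

    // Better: say what was being attempted, on what, and why it failed.
    void reportFailureUsefully(const std::string& path)
    {
        std::cerr << "CONFIG_LOAD_FAILED: could not open settings file '"
                  << path << "': " << std::strerror(errno) << std::endl;
    }

    int main()
    {
        std::ifstream settings("settings.cfg");
        if (!settings)
        {
            reportFailureBadly();
            reportFailureUsefully("settings.cfg");
        }
    }

The second message is something you can actually search for, or grep out of a logfile six months later.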

Blame - As the saying goes, a bad workman blames his:

Blame, of course, doesn't fix the problem, but it may lose you some allies. At the end of the day, even if you can blame another system, you still have a bug to fix.

The corresponding virtue is quite obviously responsibility.

Vagueness is fatal to effective fault finding. Not being able to answer "what exactly is the bug?", or fixing a bug but not "the" bug, can both intervene to mess things up.

However, precision greatly improves bug hunting. If something seems to be breaking repeatedly, focusing on what you are doing, making error messages more useful etc. can all help.

The bad news is that debugging is hard, and is not likely to get any easier:

The more effective we can be at preventing, identifying and fixing bugs the less time we will spend unnecessarily in front of the debugger.

Researching a Problem and Getting Meaningful Results

(Alan Lenton)

Researching a Problem and Getting Meaningful Results - Alan Lenton.

If you're on an obviously failing project, how do you get management to listen?

That was the question posed by this session. One obvious answer is of course to quantify it in a form they understand and will therefore listen to. This actually dovetails rather closely with Tom Gilb's EVO session earlier this week, albeit from a different perspective.

Fortunately, with a bit of work you can quantify just about anything (technical debt anyone?). There is however a danger that quantifying things becomes an end in itself, rather than a tool to solve a problem. Once you have quantified a problem, the presentation method of choice for managers is of course the spreadsheet (which also provides a simple way to present the results graphically if appropriate).

A financial cost estimate is of course key for this target audience. Once you have an idea of how long an issue would reasonably take to fix, it is straightforward to calculate its cost from the time to fix and the hourly cost, including (or excluding, for maximum impact when you add them in later!) overheads.
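To take a purely illustrative example (the numbers here are mine, not Alan's): an issue estimated at 15 developer-days to fix, at say £350 per day before overheads, comes to £5,250 - a figure a manager can weigh directly against the cost of simply living with the problem.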

If you are planning to make a financial case it is also worth remembering that capital costs and labour costs do not always compare directly, since the former can (certainly in the UK) have an impact on profit margins but the latter will not (you find this sort of stuff out when you set up your own company, believe me!).

A key question is of course how to quantify a failing project as a whole, rather than just one part which can be fixed. The obvious metric is "how much is the company spending per month on this project?".


C++ 2009 in 90 minutes

(Alistair Meredith - Codegear)

Alistair Meredith is a member of the C++ Standards Committee, and this session was a lightning tour of the changes in C++ 2009 (otherwise known as C++ 0x; Alistair stated that they are aiming for a 2009 release) - the first full C++ standard release since C++ 1998. As such, it is a major update.

Alistair first of all described the features which will (unfortunately) be missing from this release: e.g. library features beyond TR1, C++ modules, math binding and garbage collection have been deferred until TR2 (due in 2012?) or will be incorporated into separate standards.

The final release candidate of the standard should be out in September 2008 - which would mean that all comments will be received by January.

So what's new? In short:

Some of the most fundamental changes are (as is to be expected) in the area of concurrency. Notably, C++ 2009 will finally define a modern memory model, which should lead to less uncertainty in defining what is and is not acceptable in multi-threaded code. The biggest impact of this change is of course in defining which fundamental assumptions can and cannot be made by optimisers, so it should be largely transparent for most.

Other changes in this area include the addition of defined atomic operations (there is a new atomic keyword), intrinsic threads and locks and (possibly) futures. Thread pool support has been deferred to TR2, which is a shame but understandable given the volume of change already proposed.
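To give a flavour of what this will look like in practice, here's a minimal sketch using the proposed standard thread, mutex and atomic facilities. I'm working from the current draft here, and using the std::atomic<> library template rather than a keyword, so the exact names and syntax may still change before the standard ships:

    #include <atomic>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::atomic<int> counter(0);    // atomic: no data race, no explicit lock needed
    std::mutex coutMutex;           // protects std::cout from interleaved output

    void worker(int id)
    {
        for (int i = 0; i != 1000; ++i)
            ++counter;              // atomic increment

        std::lock_guard<std::mutex> lock(coutMutex);
        std::cout << "thread " << id << " finished\n";
    }

    int main()
    {
        std::thread t1(worker, 1);  // intrinsic threads in the standard library
        std::thread t2(worker, 2);
        t1.join();
        t2.join();
        std::cout << "counter = " << counter << "\n";   // always 2000
    }

The point is that all of this is defined by the standard itself - no platform threading API, and no arguments about what the optimiser is allowed to do to the shared counter.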

Alistair talked at length and in detail about the new and changed language features, but did not have time to discuss the corresponding library changes. I can't even begin to do everything justice, so here's a long bulleted list of the changes he described:

And that's just in the compiler...!

The State of the Practice

(Tom Gilb, Hubert Matthews, Russell Winder, Peter Sommerlad and James Copelien)

The State of the Practice.

The subject of this panel was in effect: "Are we barking up the right tree? So many developers have no idea of basic good practice. Discuss..."

While I can't even begin to do the ensuing discussion justice, the panel members' responses to the opening question give an interesting insight into it:

Tom Gilb: "There is not enough focus on delivering value to our stakeholders"

Hubert Matthews: "We have forgotten the human element and reward structures reflect that"

Russell Winder: "Polarisation. There is (unfortunately) a lot of dross out there."

Peter Sommerlad: "The state of practice is partly a reflection of past failure in academia. It is now too easy for lay people to produce badly written software."

James Copelien: "This is a wicked problem without clear cause and effect."

I gave up taking notes when the discussion wandered into the "bottomless pit" issue of professional certification...

Posted by Anna at 9:45pm


Is it Friday already?

Friday 4th April, 2008


May You Live in Interesting Times

(Andrei Alexandrescu)

This session was a humorous illustration of the ideas and issues involved in the C++ 0x language design, and how tricky it can be to design a modern language.

Andrei very humorously illustrated that in such a large language there are so many domains that no one person is likely to be an expert in them all - and C++ is such a big language that this is almost inevitable. Even the simplest problem - writing an identity() function which returns its argument unchanged - is not as simple as it seems in C++ if all use cases are considered.
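If you haven't tried it, here's a rough sketch of why it gets awkward, written against the proposed rvalue reference syntax - treat it as an approximation of the problem rather than a definitive answer:

    #include <string>
    #include <utility>

    // Take the argument by value?  Every call pays for a copy.
    template <typename T>
    T identity_by_value(T t) { return t; }

    // Take it by non-const reference?  Temporaries are rejected:
    //   identity_by_ref(std::string("hello"));   // won't compile
    template <typename T>
    T& identity_by_ref(T& t) { return t; }

    // The C++ 0x rvalue reference approach accepts lvalues and rvalues
    // alike without copying - although the reference returned from a
    // temporary is only valid until the end of the full expression,
    // which is exactly the kind of subtlety Andrei was talking about.
    template <typename T>
    T&& identity(T&& t) { return std::forward<T>(t); }

    int main()
    {
        std::string s("hello");
        identity(s);                      // T deduced as std::string&
        identity(std::string("world"));   // T deduced as std::string
    }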

He also described some of the more notable new language features in C++ 0x:

The bottom line is that if you work with C++ code and haven't taken a look at what is coming in C++ 0x, you probably should.

C++ Refactoring

(Peter Sommerlad)

C++ Refactoring and TDD in Eclipse/CDT.

This session focused on TDD and C++ refactoring in Eclipse. Peter's group at the Institute for Software (http://ifs.hsr.ch) has produced some very interesting C++ refactoring and unit testing plug-ins for Eclipse CDT. We have been talking to Peter about static analysis tools for Eclipse during the week, so this was a great chance to see the tools his group have developed in action.

Peter gave a brief introduction to TDD for anyone who wasn't too familiar with it, before firing up Eclipse to demo the CUTE plug-in.

At first glance, the plug-in seems similar in concept to TestDriven.NET, but with a better user interface. For example, it has a quite comprehensive tool window (a little reminiscent of the NUnit GUI) which shows not only the tests but also the console output from the tests themselves.

One very nice feature of the CUTE plug-in is that it will generate stub tests and test suites within the IDE automatically.

Peter spent most of the session going through a couple of examples using the CUTE plug-in. Unfortunately we didn't have time to look at the refactoring plug-in in depth, but what we did see certainly looked quite comprehensive - possibly more so than that provided for Visual Studio by Visual Assist.

Hacked

I'd just sat down for one of the early afternoon sessions when Beth came and grabbed me saying "we have a problem". It turned out that while she was in the sponsors area one of the Perforce representatives came up to her to tell her that our site had been defaced. Sure enough, when we looked we found that all of the index pages had been replaced. As several other sites on the same server had been defaced in the same way it looks like it was a "scoreboard" attack on the host's server rather than a directed attack on a specific site. Still, it's a bit annoying to have to spend time repairing things right now.

As a result of the panic, I missed the afternoon sessions. Sigh.

I did however have a very interesting discussion with Peter Hammond and (later) Tom Gilb about financially quantifying technical debt, and what effect seeing a financial cost exposed in code analysis tools might have on technical management. There is a significant volume of discussion on this subject already (just search the web and you'll find it all easily enough), but it is such a subjective problem that I suspect an authoritative formula may be somewhat hard to derive.

We ate out in the evening (amid a slew of bad elephant jokes...) with Ralph and Phil at the Plough, having failed to get into the Trout (where we went last year; we should have booked, really!).

Posted by Anna at 11:44pm


A Lakos induced day off

Thursday 3rd April, 2008


I've been Lakosed...

Today's sessions were pretty much a washout for me after the experiences (is that the right word?) of last night. Although I did attend David Vest's "Starting and running a MicroISV" session (nothing new there for me, but that's probably a good thing given that we've been going a while now) and Russell Winder's very interesting "Them Threads, Them Threads, Them Useless Threads" session, I'm afraid I was in no state to take notes in the frenetic way I normally do.

That said, I did enjoy both sessions, and was able to function enough to chat to people reasonably coherently!

Normal service (as they say) resumes tomorrow.

Posted by Anna at 11:00pm


This year's fun begins

Wednesday 2nd April, 2008


This morning was a bit of a rush as we did some last minute fine tuning on the "Death by Powerpoint" rolling presentation we will be using on our stand at the sponsors reception this evening.

When we finally arrived in the conference lobby we immediately bumped into Kevlin Henney (who thanked us for blogging one of his quotes from last year) and Alan Lenton. The general consensus was that the "Bad Bug Day" motif is a fun one, so we may just have to get some more teeshirts printed.

At lunchtime we set up our stand ready for the Sponsor's reception in the evening:

Our stand at ACCU 2008.

The evening was every bit as busy as you'd expect given the free wine:

Free wine is a powerful incentive...

We had a steady flow of people coming to see us demonstrate Visual Lint:

Beth explaining our product to a couple of interested delegates.

After the reception a whole bunch of us headed out into town in appropriately disorganised fashion to a couple of Cantonese and Thai restaurants. Amid much hilarity and far too much Tiger beer we headed back in groups to the hotel bar where a significant number of us proceeded to become thoroughly Lakosed. * I finally left the bar at 3:30am...

* A state of advanced inebriation caused by repeatedly being bought drinks (irrespective of protestations to the contrary) by John Lakos, author of Large Scale C++ Software Design. Despite being a recognised hazard of being in the bar at the ACCU Conference, every so often even the most wary of us can be caught out...

Anyway, on to today's sessions:

Value Delivery for Agile Environments

(Tom Gilb)

The conference noticeboard.

The thrust of this session was that although agile methods are better at organising development tasks than conventional methods, they do not really focus on the needs of stakeholders. For example, they do not provide guidance on the business value of each potential task.

By contrast, Evolutionary Project Management (EVO) is more focused on business goals than on tasks and iterations/sprints. In fact, an approach such as EVO can be used together with agile approaches such as Scrum to great effect.

EVO is based on continuous measurement and reassessment of business metrics, stakeholder requirements, budgets, goals, impact estimation (e.g. via impact estimation tables), estimating, planning and tracking. Key principles include:

My initial reaction was that EVO in its pure form may not be entirely suitable for a small organisation due to the sheer amount of analysis required; however this is no different from the situation with any process/methodology - Scrum (for example) probably doesn't work particularly well in a small organisation either. The lesson is of course to take the good bits, and leave those which bring in more overhead than you need. That said, Tom apparently has a case study involving a 3 person team which isn't too far removed from the micro-ISV world we're familiar with.

Either way, EVO is definitely an approach professional developers and project managers should be aware of. The majority will of course probably carry on in blissful Waterfall-esque ignorance as always...

Slides from this session

Santa Claus and Other Methodologies

(Gail Ollis)

Gail is an active member of the ACCU South-Coast group, and a very entertaining and thought provoking speaker.

"I don't believe in methodologies"

Methodology is strictly the study of methods etc rather than their application, but the use of the name in conjunction with development processes can (unfortunately) lend them "instant" credibility in the eyes of some - the "follow this and everything will be perfect" delusion. The real world is of course not like that - any "process" is only going to work well if you buy into it and tailor it to your own needs. If you follow a process blindly, it will almost certainly fail you.

Gail followed her introduction with a brief historical foray into a long dead software development "methodology" called RTSAD, and a project development process called Goal Directed Project Development (GDPM), outlining the failures of both when applied within an organisation to illustrate her point.

New methodologies offer new buzzwords, which can lead companies to adopt them for the wrong reasons. Particular groups of people seem to be most susceptible to this:

(the first and last are often managers; the second and third are often developers).

At the end of the day, although these are people problems - and not process problems - persuading people to change the way they work is all too often exceptionally hard.

The lesson is of course not to look at the solution (e.g. "adopting <Methodology X> will solve all our problems"), but at the real problem. Once the problem has been identified, potential solutions can be visualised and investigated.

Some questions we could (for example) ask about a potential solution include:

As ever, there is (unfortunately) no magic bullet.

Robots Everywhere

(Bernhard Merkle)

We met Bernhard for the first time last year when he ran a very interesting session on architectural analysis tools. This year he has turned his hand to looking at the world of robotics.

Bernhard started the session with a fascinating illustrated summary of the state of the art today, including competitive events such as RoboCup (robot football) and the DARPA Grand Challenge (autonomous vehicles).

Concurrency and (naturally) functional programming are of course fundamental to robotics. Although there are a number of established players in this field, Microsoft are now targeting the emerging home and educational markets with Microsoft Robotics Studio (MSRS) and the parallel computing initiative.

Microsoft apparently learned that typically 80% of the development time on a robotics project is currently spent developing limited use frameworks, and the MSRS effort is in part aimed at generalising these sorts of efforts, although a secondary aim is obviously to support adoption of the .NET Framework within robotics applications. MSRS has a heavily concurrent and distributed architecture, which Bernhard spent some time describing in depth. It was also interesting to see that C# was being used rather than the (I would have thought) better suited F# functional language.

All in all this is a fascinating subject, and no doubt one which will become more and more prominent.

A Tale of 2 Systems

(Pete Goodliffe)

"A Tale of 2 Systems"
As the inventor of "Alphabetti Custard", Pete Goodliffe needed no introduction.

This session looked humorously at the long term impact of design on a software system, using two real examples. Pete's assertion is that the quality of a project is determined mostly by the quality of its design.

Good designs should of course be:

Pete gave examples of two similar systems he had worked on to illustrate these principles:

"The Messy Metropolis"

This was a spaghettified mess, the code for which had grown "organically" over time with very little thought. Pete rather appropriately illustrated it with a picture of a turd! We've all seen systems like this, so I'm sure I don't need to elaborate further (but please look at the slides if you really do feel the need to see the turd...).

Eventually, such systems grind to a halt and effectively force a complete rewrite - whereupon the cycle can all too often repeat, and at huge cost.

Design problems can of course be caused by company culture (e.g. empire building, not giving developers time to rectify smells in the design) and poor development processes with insufficient thought given to design issues.

Pete ably described the problems this particular project caused within the company at every level, from support to sales, marketing, customer support and manufacturing. It (not surprisingly) eventually ended up in a costly rewrite - which is of course a high risk proposition in its own right.

"Design Town"

This project was different from the outset. The project was run by a small, flat team with a clear roadmap and following a defined process (XP in this case).

Perhaps crucially, the design was limited to that which was sufficient to meet the requirements (a key agile principle, in my view) without attempting to include detailed provision for possible future requirements.

In this system, the design made it far easier to add new functionality. It was straightforward to locate where specific functionality lay, and new functionality gravitated naturally to the right place. Of course, bugs were also easier to locate and fix.

Most importantly, the software developers took responsibility for the design. This last point is (in my view) fundamentally important - some developers I meet are sadly lacking in the essential motivation to do this.

So, what lessons can be learnt from these two projects?

How then can we improve a bad design?

Slides from this session

Posted by Anna at 8:31pm


A Functional Workout

Tuesday 1st April, 2008


Programming Erlang book cover
The cover of Joe Armstrong's book "Programming Erlang".

Today is the pre-conference workshop day, and Beth and I have both opted for Joe Armstrong's "Fun with Erlang" session.

If you've not come across it before, Erlang is a functional language designed for concurrent programming. For someone from an object orientated background it is quite a paradigm shift, and the syntax takes some getting used to. Nevertheless, it is pretty obvious to me already that this is a language with some real strengths.

One thing I didn't realise during our preparation for the conference was that Erlang was developed from Prolog - which may explain why parts of it (pattern matching, for example) seemed strangely familiar (I did Prolog as part of a "Machine Intelligence" course at Surrey University).

Haskell and Microsoft's latest research language F# are of course aimed at the same problem domain. It will be interesting to see how strong the take-up of such functional languages is over the next couple of years, and whether we see the start of a longer term trend of increasing adoption.

Having said all that, as a (primarily) user interface developer I have no idea what practical use it is likely to be to us in the immediate future...but of course you never know.

Posted by Anna at 6:16pm