Wednesday 2nd April, 2008
This morning was a bit of a rush as we did some last-minute fine-tuning on the "Death by PowerPoint" rolling presentation we will be using on our stand at the sponsors' reception this evening.
When we finally arrived in the conference lobby we immediately bumped into Kevlin Henney (who thanked us for blogging one of his quotes from last year) and Alan Lenton. The general consensus was that the "Bad Bug Day" motif is a fun one, so we may just have to get some more T-shirts printed.
At lunchtime we set up our stand ready for the Sponsors' Reception in the evening:
The evening was every bit as busy as you'd expect given the free wine:
We had a steady flow of people coming to see us demonstrate Visual Lint:
After the reception a whole bunch of us headed out into town in appropriately disorganised fashion to a couple of Cantonese and Thai restaurants. Amid much hilarity and far too much Tiger beer we headed back in groups to the hotel bar where a significant number of us proceeded to become thoroughly Lakosed. * I finally left the bar at 3:30am...
<p class="footnote>* A state of advanced inebriation caused by being repeatedly being bought drinks (irrespective of protestations to the contrary) by John Lakos, author of Large Scale C++ Software Design. Despite being a recognised hazard of being in the bar at the ACCU Conference, every so often even the most wary of us can be caught out occasionally...
Anyway, on to today's sessions:
Value Delivery for Agile Environments
(Tom Gilb)
The thrust of this session was that although agile methods are better at organising development tasks than conventional methods, they do not really focus on the needs of stakeholders. For example, they do not provide guidance on the business value of each potential task.
By contrast, Evolutionary Project Management (EVO) is more focused on business goals than on tasks and iterations/sprints. In fact, an approach such as EVO can be used together with an agile approach such as Scrum to great effect.
EVO is based on continuous measurement and reassessment of business metrics, stakeholder requirements, budgets, goals, impact estimation (e.g. via impact estimation tables), estimating, planning and tracking. Key principles include:
- Critical Stakeholders determine the values a project needs to deliver.
- Values can and must be quantified numerically. No matter what the value is, the chances are somebody has already measured it in some way; it is of course critical that agreement is reached on how individual values are measured.
- Values are supported by a Value Architecture (defined as "anything you implement with a view to satisfying stakeholder values").
- Value levels (the degree of satisfaction of value needs) are determined by timing, architecture effect and resources.
- The required value levels can differ for different scopes (e.g. where, or for which stakeholder). Setting value levels too high can kill projects by delaying delivery and inflating costs.
- Value can be delivered early. Plan to deliver real value to stakeholders as early as possible, and continue to deliver additional value continuously.
- Value can be locked in incrementally: deliver production quality systems throughout, not "quick fixes".
- New values can be discovered by stakeholders in response to delivered values. It therefore follows that developers must be in direct contact with stakeholders.
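For readers unfamiliar with impact estimation tables, here is a minimal sketch of the basic idea (all value names and figures below are invented for illustration, not taken from Tom's session): score each candidate design idea by the fraction of each quantified value gap it is expected to close, then rank the ideas by estimated value per unit cost.

```python
# Hypothetical impact estimation table. Each design idea is scored by its
# estimated contribution towards each quantified value goal; all names and
# figures are invented for illustration.

goals = {
    # value name: (baseline, target) in agreed units
    "startup_time_s": (10.0, 2.0),
    "support_calls_per_week": (50, 20),
}

# Estimated impact of each candidate design idea, expressed as the fraction
# of the baseline-to-target gap it is expected to close, plus estimated cost.
ideas = {
    "lazy_loading":     {"startup_time_s": 0.6, "support_calls_per_week": 0.0, "cost": 5},
    "better_installer": {"startup_time_s": 0.1, "support_calls_per_week": 0.5, "cost": 3},
}

def value_per_cost(idea):
    """Sum of impact fractions divided by cost - a crude priority score."""
    impacts = ideas[idea]
    total = sum(v for k, v in impacts.items() if k != "cost")
    return total / impacts["cost"]

# Rank ideas in descending value-for-money order.
ranked = sorted(ideas, key=value_per_cost, reverse=True)
print(ranked)  # -> ['better_installer', 'lazy_loading']
```

The point of the exercise is not the arithmetic but the conversation it forces: every impact estimate has to be argued for against an agreed measurement of the value in question.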
My initial reaction was that EVO in its pure form may not be entirely suitable for a small organisation due to the sheer amount of analysis required; however this is no different from the situation with any process/methodology - Scrum (for example) probably doesn't work particularly well in a small organisation either. The lesson is of course to take the good bits, and leave those which bring in more overhead than you need. That said, Tom apparently has a case study involving a 3 person team which isn't too far removed from the micro-ISV world we're familiar with.
Either way, EVO is definitely an approach professional developers and project managers should be aware of. The majority will of course probably carry on in blissful Waterfall-esque ignorance as always...
Santa Claus and Other Methodologies
Gail is an active member of the ACCU South-Coast group, and a very entertaining and thought provoking speaker.
"I don't believe in methodologies"
Strictly speaking, "methodology" is the study of methods rather than their application, but the use of the name in conjunction with development processes can (unfortunately) lend them "instant" credibility in the eyes of some - the "follow this and everything will be perfect" delusion. The real world is of course not like that - any "process" is only going to work well if you buy into it and tailor it to your own needs. If you follow a process blindly, it will almost certainly fail you.
Gail followed her introduction with a brief historical foray into a long dead software development "methodology" called RTSAD, and a project development process called Goal Directed Project Development (GDPM), outlining the failures of both when applied within an organisation to illustrate her point.
New methodologies offer new buzzwords, which can lead companies to adopt them for the wrong reasons. Particular groups of people seem to be most susceptible to this:
- Budget holders
- Seekers of the "One True Way"
- Advocates of the "latest big thing"
- Grand planners
(the first and last are often managers; the second and third are often developers).
At the end of the day, although these are people problems - and not process problems - persuading people to change the way they work is all too often exceptionally hard.
The lesson is of course not to look at the solution (e.g. "adopting <Methodology X> will solve all our problems"), but at the real problem. Once the problem has been identified, potential solutions can be visualised and investigated.
Some questions we could (for example) ask about a potential solution include:
- How does this address our specific problem?
- What does this step/artifact/process do for us?
- What demands does it make of us?
- Can we integrate this step/artifact/process and its tools smoothly with what we have?
- Does it impede continuous improvement?
As ever, there is (unfortunately) no magic bullet.
Robots Everywhere
We met Bernhard for the first time last year when he ran a very interesting session on architectural analysis tools. This year he has turned his hand to looking at the world of robotics.
Bernhard started the session with a fascinating illustrated summary of the state of the art today, including competitive events such as RoboCup (robot football) and the DARPA Grand Challenge (autonomous vehicles).
Concurrency and (naturally) functional programming are of course fundamental to robotics. Although there are a number of established players in this field, Microsoft are now targeting the emerging home and educational markets with Microsoft Robotics Studio (MSRS) and the parallel computing initiative.
Microsoft apparently learned that typically 80% of the development time on a robotics project is currently spent developing limited-use frameworks, and the MSRS effort is in part aimed at generalising these sorts of efforts, although a secondary aim is obviously to support adoption of the .NET Framework within robotics applications. MSRS has a heavily concurrent and distributed architecture, which Bernhard spent some time describing in depth. It was also interesting to see that C# was being used rather than the (I would have thought) better-suited functional language F#.
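MSRS itself is C#/.NET, so any example here can only be a loose analogy; the following Python sketch (all names invented) illustrates the general message-passing style of concurrency Bernhard described, in which components communicate only through ports (here, queues) rather than sharing state:

```python
import queue
import threading

# A loose analogy of message-passing robotics concurrency: a controller
# component receives sensor readings on one port and posts motor commands
# on another. All names and the avoidance rule are invented for illustration.

sensor_port = queue.Queue()   # readings flow in here
motor_port = queue.Queue()    # commands flow out here

def controller():
    """Receive sensor readings and post motor commands in response."""
    while True:
        distance_cm = sensor_port.get()
        if distance_cm is None:        # sentinel: shut down cleanly
            motor_port.put(None)
            return
        # Simple avoidance rule: stop when an obstacle is close.
        motor_port.put("stop" if distance_cm < 30 else "forward")

t = threading.Thread(target=controller)
t.start()

# Feed in some readings, then the shutdown sentinel.
for reading in (100, 50, 20, None):
    sensor_port.put(reading)

commands = []
while True:
    cmd = motor_port.get()
    if cmd is None:
        break
    commands.append(cmd)

t.join()
print(commands)  # -> ['forward', 'forward', 'stop']
```

Because the controller owns no shared mutable state, adding further components (more sensors, a logger, a planner) is just a matter of wiring up more ports, which is the property that makes this style attractive for heavily concurrent systems.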
All in all this is a fascinating subject, and no doubt one which will become more and more prominent.
A Tale of 2 Systems
This session looked humorously at the long-term impact of design on a software system, using two real examples. Pete's assertion is that the quality of a project is determined mostly by the quality of its design.
Good designs should of course be:
- Easy to modify
- Easy to extend
- Flexible enough to accommodate change without stress
- Fit for purpose
- Easy to understand
Pete gave examples of two similar systems he had worked on to illustrate these principles:
"The Messy Metropolis"
This was a spaghettified mess, the code for which had grown "organically" over time with very little thought. Pete rather appropriately illustrated it with a picture of a turd! We've all seen systems like this, so I'm sure I don't need to elaborate further (but please look at the slides if you really do feel the need to see the turd...).
Eventually, such systems grind to a halt and effectively force a complete rewrite - whereupon the cycle can all too often repeat, and at huge cost.
Design problems can of course be caused by company culture (e.g. empire building, not giving developers time to rectify smells in the design) and poor development processes with insufficient thought given to design issues.
Pete ably described the problems this particular project caused within the company at every level, from sales and marketing to customer support and manufacturing. It (not surprisingly) eventually ended up in a costly rewrite - which is of course a high-risk proposition in its own right.
"Design Town"
This project was different from the outset. It was run by a small, flat team with a clear roadmap, following a defined process (XP in this case).
Perhaps crucially, the design was limited to that which was sufficient to meet the requirements (a key agile principle, in my view) without attempting to include detailed provision for possible future requirements.
In this system, the design made it far easier to add new functionality. It was straightforward to locate where specific functionality lay, and new functionality gravitated naturally to the right place. Of course, bugs were also easier to locate and fix.
Most importantly, the software developers took responsibility for the design. This last point is (in my view) fundamentally important - some developers I meet are sadly lacking in the essential motivation to do this.
So, what lessons can be learnt from these two projects?
- Design matters, but it does not happen without conscious effort
- People are key (this touches on Gail's session earlier)
- The team must be given (and accept) responsibility for design
- Good project management is essential
How then can we improve a bad design?
- First of all, we can't improve it unless we understand it. There is always information in source code control, documents etc. which can reveal aspects of the history of a project, so why not go digging and see what you can find?
- Describe the process which seems appropriate to deal with the state of the existing design (run away, rewrite, refactor etc.)
- Plan a new design based on the requirements and constraints we know now (as opposed to those we thought we knew at the outset)
- Plan a roadmap for how to take the codebase to where we want it to be, and continuously refine it as we proceed along the route.