Executing is one of the five project management process groups in the Project Management Body of Knowledge (PMBOK) from the Project Management Institute (2004).
Project Execution is where you build the project deliverables and hand them over to your customer, i.e. where you build and deliver the software. This is where most of the project effort is invested. Agile Project Planning says what you intend to do, and when; Agile Monitoring & Control helps you stay on track; but Agile Project Execution is where you do the business.
Traditional project management has little to say about what you do during project execution because execution depends on the industry, while project management is meant to be universal. Because Agile is tied to software development, one of the Principles Behind the Agile Manifesto is about technical excellence:
Continuous attention to technical excellence
and good design enhances agility.
That focus on technical excellence means the Agile community have generally adopted the technical practices from XP (Beck, 2000) and these form the foundation of Agile Project Execution.
Self-Organising Team
Two of the Principles Behind the Agile Manifesto relate to self-organising teams:
The best architectures, requirements, and designs
emerge from self-organizing teams.
Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.
Self-organisation is good because of the motivation that autonomy brings. You lose all that if the organisation can’t back up the autonomy with concrete support.
But I believe there need to be limits to self-organisation. I’ve noticed that some self-organising teams believe they have the autonomy to decide how they do absolutely everything. This is problematic because they might ignore good practice and the lessons learnt from the past. OK, things change, and this year’s idea of good practice may not be the same as last year’s, but I find that process change is slower than technology change. I’m not against change, I just like to see a good reason for it. The second problem I’ve noticed is that some self-organising teams believe they can ignore organisational policy. Sometimes they can, but often enough the team will have to back down and toe the line – and that is wasted effort.
The way I see it the project manager is the guardian of the process, and in that role they represent the business and its process demands, so not everything can be negotiable. Much is, but not everything. In the same way I wouldn’t tell a developer how to develop, I wouldn’t expect developers to tell me how to manage a project. There is a trick to managing each team, and that is the key difference in Agile Project Management – the move from directing to shepherding.
It is a software project so of course there is coding. But developers don’t just code, they are expected to design simply, share Code Ownership, adhere to Coding Standards, refactor as they go, ensure Two Pairs of Eyes check their code, write automated Unit Tests, and use an Automated Build, Integration and Test system to validate their work.
For more on good coding practices check out Code Complete by Steve McConnell (2004). McConnell is neutral on Agile but very keen on what works.
One of the Principles Behind the Agile Manifesto is:
Simplicity–the art of maximizing the amount
of work not done–is essential.
In the agile world a simple design is best. XP really hammers this concept (Beck, 2000). Design for today and refactor tomorrow. This is true for on-going design and any initial architecture during Agile Project Initiation. The design format can be as simple as a scribble on a whiteboard or a more formal model, depending on project demands. I make the Technical Lead responsible for the overall design and architecture of the product, and “Two Pairs of Eyes” is how they keep abreast of what the developers are doing.
Coding Standards have been around since long before Agile, but they’re still a good idea. They define the formatting and design conventions that all developers must abide by. Adherence to a common standard makes it easier for developers to read and pick up work from their peers. The standard is enforced through “Two Pairs of Eyes”.
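Part of a coding standard can itself be automated so “Two Pairs of Eyes” is spent on design rather than formatting. As a minimal sketch (real teams would use a proper linter; the 100-character limit and function names here are assumptions, not from the text):

```python
# Sketch: automate one coding-standard rule - flag lines that exceed an
# agreed maximum length. Illustrative only; a real team would use a linter.

MAX_LINE_LENGTH = 100  # assumed team limit

def check_line_lengths(source):
    """Return (line_number, length) for every line that breaks the rule."""
    violations = []
    for number, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:
            violations.append((number, len(line)))
    return violations
```

A check like this would typically run in the automated build, leaving reviewers free to concentrate on design and test gaps.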
Two Pairs of Eyes
I’ve had very mixed experiences with managing projects involving pair programming so it is not something I am inclined to mandate. Instead I go for the “Two Pairs of Eyes” approach. This means all code (including test code) must be seen by two pairs of eyes before it goes into production.
I make the Technical Lead responsible for ensuring “Two Pairs of Eyes” happens, but they, and the team, decide how it happens. It could be through XP style pair programming, code walkthroughs, or formal code reviews. Whatever the approach taken, the aim is to ensure conformance to coding standards, check for any gaps in the design and tests, and share knowledge among the team members.
Code Reviews or Walkthroughs
Boring but true, code reviews and walkthroughs are still useful. Most of the technical leads I’ve worked with preferred this model to Pair Programming, even if it meant they had to sit and review their team’s code themselves.
Pair Programming involves two people sitting at one machine writing the code together (Beck, 2000). At any moment one is driving the keyboard and the other is observing. Essentially it combines programming and peer review simultaneously. Those teams using Pair Programming typically write all production code in pairs but write other code individually.
There is a lot of hype and debate about Pair Programming. I’ve learnt not to mandate this practice except under specific circumstances. Research one of my teams was involved in indicates that Pair Programming helps more junior developers but slows down more senior developers (Arisholm et al., 2007). Nonetheless I’ve also found that some developers like to work in that mode, so I usually give people the choice.
The one time I mandate Pair Programming is when I have to ramp up a development team quickly. On one occasion we took a development team from eight developers to 32 within three months – this was only possible using pair programming. Pairing meant the newcomers quickly learnt the problem domain and the local way of using the technology from the old hands.
Automated Build, Integration, and Test
In my experience manual system integration is painful and soaks up many person-days of effort over the course of a project. And the fact it is time consuming usually means integration is left too late in the project, which just compounds the problem. That manual effort is a waste given this work can be automated.
The best teams I’ve worked with have an automated method to build, integrate and test the evolving software. This approach gives fast feedback, leading to greater confidence in system stability.
There are a couple of options in how to do this.
Nightly Build and Smoke Test
A first step towards continuous integration is a daily build and smoke test, usually run overnight. The build script checks the code out of the source code repository, builds the system in an integration-test environment, and then runs the automated tests against the new instance. In the absence of automated tests the Tester can do a manual smoke test when they get into the office in the morning. The team then fixes any faults so the problem doesn’t repeat itself the next night.
Teams I’ve worked with have used ANT and NANT for the build scripts.
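The driver for such a build is typically a trivial script that runs each step in order and stops at the first failure. A minimal sketch (the repository and build commands are hypothetical placeholders; real scripts would be Ant/NAnt targets or shell):

```python
# Sketch of a nightly build-and-smoke-test driver. The exact commands
# ("git pull", "ant build", "ant smoke-test") are illustrative assumptions.
import subprocess

STEPS = [
    ["git", "pull"],        # update the working copy from the repository
    ["ant", "build"],       # compile the system (NAnt on .NET projects)
    ["ant", "smoke-test"],  # run the automated smoke tests
]

def nightly_build(steps=STEPS, run=subprocess.run):
    """Run each step in order; stop and report at the first failure."""
    for step in steps:
        result = run(step)
        if result.returncode != 0:
            return f"FAILED at: {' '.join(step)}"
    return "SUCCESS"
```

The result (success, or which step failed) is what the team sees first thing in the morning.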
Note: the phrase “smoke test” comes from electrical engineering, where running a test on a faulty circuit may well have led to “smoke”.
Continuous Integration
Continuous Integration takes the Nightly Build and Smoke Test one step further. In this case, rather than kicking off the build once a day, the build is launched at every commit into the source code repository. That means feedback is even faster and it is clearer which new code caused problems, since you can tie the fault to the latest check-in, which is probably only 5 or 10 minutes earlier.
Fixing faults highlighted by the continuous integration build is a priority for the team. They need to be fixed straight away as faults will interfere with the work of the developers and persistently broken tests undermine the whole continuous integration effort.
Developers normally work on a local copy of the source code. They are encouraged to integrate and test their changes every couple of hours to ensure their work doesn’t break anything.
Ideally the total build and test run will take less than 10 minutes. If it takes longer the developers won’t run the tests. To get the run time down you can try to speed up the individual tests or implement a stepped continuous integration framework.
A stepped continuous integration framework means splitting the tests into groups to run one after another. For example, the groups might be fast unit tests (essentially the basic smoke test), followed by slower acceptance tests, and then by even slower system tests. Very slow or resource intensive tests, for example builds against sizeable live databases or ones which obtain code-coverage metrics, might be run nightly. If one stage of the continuous integration pipeline does not complete successfully (i.e. one or more tests fail), subsequent stages are not executed.
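The stepped arrangement can be sketched in a few lines: each stage is a group of tests, and slower stages only run if everything before them passed. The stage names and the toy test functions below are illustrative:

```python
# Sketch of a stepped continuous integration pipeline: later (slower)
# stages only run if every earlier stage passed. Stage contents are toys.

def run_pipeline(stages):
    """stages: list of (name, [tests returning True on pass]).
    Returns (name, passed) per stage run; stops at the first failing stage."""
    results = []
    for name, tests in stages:
        passed = all(test() for test in tests)
        results.append((name, passed))
        if not passed:
            break  # don't waste time on slower stages after a failure
    return results

stages = [
    ("unit", [lambda: True, lambda: True]),  # fast smoke-level tests
    ("acceptance", [lambda: False]),         # a failing acceptance test
    ("system", [lambda: True]),              # never reached
]
```

Running this pipeline stops after the acceptance stage fails, so the slow system tests are skipped, which is exactly the behaviour described above.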
Although not mandated by any of the Agile approaches, teams using Continuous Integration usually collect metrics about the quality of the code. Examples are code coverage (the amount of code covered by unit tests), conformance to maintainability design principles (e.g. Lack of Cohesion of Methods, Normalised Distance from the Main Sequence), and language-specific metrics.
Teams I’ve worked with have used CruiseControl, the original continuous integration tool, but there are others out there.
Testing gives us some certainty that the software does what it is meant to. But there are different types of testing, done at different times, often done by different types of people.
Test Driven Development is a practice advocated by XPers that involves writing a failing automated test before changing any code. This is another XP practice I don’t mandate. Although I personally like the practice, I find it comes late in the progression to Agile maturity, and for more novice teams it just adds confusion.
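The test-first rhythm is easiest to see in miniature. A hedged sketch (the `describe` function and its fizz rule are invented purely to show the cycle):

```python
# Sketch of the test-first rhythm: write a failing test, then write just
# enough code to make it pass. The example function is made up.

# Step 1: the test comes first. It fails initially because multiples of
# three are not yet handled by the code.
def test_describe():
    assert describe(2) == "2"
    assert describe(3) == "fizz"  # red until step 2 is written

# Step 2: the simplest code that makes the test pass.
def describe(n):
    return "fizz" if n % 3 == 0 else str(n)
```

The next failing test (say, for multiples of five) would then drive the next small change, and refactoring happens once the bar is green.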
Nightly Build and Smoke Test and Continuous Integration provide most value if the tests are automated. Most unit tests, and many acceptance and system tests, can be automated; other types of tests may be automated as well. Exploratory tests are never automated.
An Agile Team does unit, acceptance, system, exploratory and regression testing within each Timebox alongside coding. Other types of testing require extra hardware, people, organisation, funds and/or specialist skills that are generally not available to the team, at least not in every Timebox. These types of tests are usually carried out outside of Timeboxes and often by specialists.
| Test Type | Purpose | Who |
| --- | --- | --- |
| Unit Testing | Defines the behaviour of specific system modules | Developer |
| Acceptance Testing | Defines the acceptance criteria for a User Story | Product Owner, Tester |
| System Testing | Defines edge cases, standards compliance, browser tests, etc. | Tester |
| Exploratory Testing | Manual, unscripted attempt to find new problems | Product Owner, Tester |
| Regression Testing | Repeat earlier testing to ensure changes haven’t broken the system | Developer, Tester |
| User Testing | Get feedback from the intended users | Product Owner, Designer |
| System Integration Testing | Integrate and test the work of multiple teams | Tester |
| Performance Testing | Assess ability to cope with volume, scale and load | Tester |
You’ll have to decide when and where to log the faults. I don’t bother logging any faults found on code which is currently being worked on in the Timebox. I do log other faults. These go into my task tracking database for scheduling later. I only put significant faults on the Release Plan. See Agile Project Planning.
The developers do the Unit Testing. Each Unit Test defines a certain aspect of the behaviour of the class or method being tested. Nearly all Unit Tests should be automated.
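A minimal sketch of what such a test looks like, using Python's standard `unittest` module (the `Account` class is a made-up example, not from the text):

```python
# Sketch: each unit test defines one aspect of the behaviour of the
# class under test. Account is an invented example class.
import unittest

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class AccountTest(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account(balance=10)
        account.deposit(5)
        self.assertEqual(account.balance, 15)

    def test_deposit_rejects_non_positive_amounts(self):
        with self.assertRaises(ValueError):
            Account().deposit(0)
```

Because each test names the behaviour it defines, a failing test tells the developer exactly which rule was broken.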
Acceptance Tests are pitched at the level of the User Story. Each Acceptance Test defines a certain aspect of the User Story in the form of a user action and the expected results. They are used to ensure the delivered software matches the Product Owner’s expectations. The Product Owner is meant to create the Acceptance Tests although they are often helped by the rest of the team, typically the Testers. Each User Story should have at least one Acceptance Test. Preferably Acceptance Tests are created before the Timebox Planning Meeting so they can be used in the meeting to clarify the purpose of the User Story. That helps the team estimate the size of the User Story. Initially Acceptance Tests are expressed in business language but ideally the Acceptance Tests are automated during the Timebox.
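An automated Acceptance Test keeps that user-action-plus-expected-result shape. A hedged sketch for a hypothetical User Story ("a shopper can apply a discount code at checkout"); the `Checkout` class, the code `SAVE10`, and the amounts are all invented for illustration:

```python
# Sketch: an acceptance test expressed as Given / When / Then against a
# made-up Checkout class. Amounts are integer cents to avoid rounding.

class Checkout:
    DISCOUNT_PERCENT = {"SAVE10": 10}  # known codes (assumption)

    def __init__(self, total_cents):
        self.total_cents = total_cents

    def apply_code(self, code):
        percent = self.DISCOUNT_PERCENT.get(code, 0)
        self.total_cents -= self.total_cents * percent // 100

def test_shopper_can_apply_discount_code():
    # Given a basket worth 100.00
    checkout = Checkout(total_cents=10_000)
    # When the shopper applies the code SAVE10
    checkout.apply_code("SAVE10")
    # Then the total is reduced by 10%
    assert checkout.total_cents == 9_000
```

The business-language form of the test (the Given/When/Then comments) is what the Product Owner writes; automating it during the Timebox is the team's job.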
System Testing is an older form of testing, only used if Acceptance Testing doesn’t cover everything. The problem with this style of testing is that it looks at tests in isolation rather than tying them to a specific User Story.
Exploratory testing is manual, unscripted testing of a completed story, aiming to find bugs. The automated tests are meant to cover all the known problems. Exploratory testing is attempting to find new problems.
Agile puts a big emphasis on automated tests. This is expensive in terms of development time, but pays dividends when you need to run Regression Tests. Regression tests are any tests whose sole purpose is to ensure a code change hasn’t unexpectedly broken existing functionality. You can do this manually, but in my experience testers take unreasonable shortcuts if they have to do manual regression tests. Manual regression tests mean either only a small sample of the tests are run – like a Smoke Test – or no tests are run at all. Automation makes regression testing easy. Click the go button and wait for the green light (or the red).
User centred design demands continuous User Testing, i.e. getting direct feedback from the intended users of a system. User Testing should consider Usability, Accessibility and the Visual Design.
User Testing should occur early and often. Silicon Valley Product Group recommend creating a High Fidelity Prototype as part of Agile Project Initiation so it is possible that User Testing starts before development.
System Integration Testing
System Integration Testing occurs when the work of several Agile Teams must be combined into one overall solution. Integration can occur either continuously (recommended) or after the end of each Timebox. The bulk of the tests should be automated, but some manual testing in a separate environment is likely to be necessary.
It isn’t strictly accurate, but when I say performance testing I include load and stress testing. Full scale performance testing requires considerable time, expertise and hardware, but I’ve found that the developers on the team can make a considerable contribution if they choose to focus on performance during development; of course the Product Owner would have to make this a priority.
Refactoring is on-going improvement of the code base to make it easier to maintain and test (Fowler, 1999). It is a mechanism to avoid the build up of technical debt.
Some examples of refactoring:
- Improving readability by giving classes and methods clear names and clear responsibilities
- Reducing duplication so code changes only need to be applied in one place
- Removing unused code so you don’t have to maintain it.
Refactoring is much easier with automated tests and continuous integration.
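The duplication example above can be made concrete. A hedged sketch (the functions, the tax rule, and the 20% rate are invented for illustration): before the refactoring, `invoice_total` and `quote_total` each repeated the same tax arithmetic, so a rate-rule change had to be made twice; after, the rule lives in one clearly named place:

```python
# Sketch of a "reduce duplication" refactoring. Previously both totals
# repeated `net + net * 20 // 100` inline; now the shared rule has one home.

TAX_PERCENT = 20  # illustrative rate; amounts are integer cents

def with_tax(net_cents):
    """The single place the tax rule lives after the refactoring."""
    return net_cents + net_cents * TAX_PERCENT // 100

def invoice_total(net_cents):
    return with_tax(net_cents)

def quote_total(net_cents):
    return with_tax(net_cents)
```

The automated tests are what make this safe: behaviour is unchanged (both totals still agree), and a green bar after the change proves it.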
In XP refactoring is part of Incremental Design in which developers are encouraged to introduce small design changes as new features are added. Personally I think some upfront design and/or architectural work is necessary as part of Agile Project Initiation.
XP encourages Collective Code Ownership so that any team member can understand and make changes to all of the code written by the team (Beck, 2000). In contrast Feature Driven Development (Coad, Lefebvre, & Luca, 1999) advocates an owner for each class. Personally I don’t have a preference. It just needs to be clear to the team what code ownership rules apply.
Arisholm, E., Gallis, H., Dybå, T., & Sjøberg, D. I. K. (2007). Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise. IEEE Transactions on Software Engineering, 33(2), 65-86.
Beck, K. (2000). Extreme Programming Explained: Embrace Change. Reading, Massachusetts: Addison-Wesley.
Coad, P., Lefebvre, E., de Luca, J. (1999). Java Modeling in Color with UML: Enterprise Components and Process. Prentice Hall.
Fowler, M. (1999). Refactoring: Improving the Design of Existing Code. Addison-Wesley.
McConnell, S. (2004). Code Complete: A Practical Handbook of Software Construction [2nd Ed.]. Microsoft Press.
Project Management Institute. (2004). A Guide to the Project Management Body of Knowledge (PMBOK Guide) [3rd Edition]. Author.
Wake, W. (2005). Refactoring Workbook. Addison-Wesley.