Software Craftsmanship…

Manifesto for Software Craftsmanship

I think this is long in the making and sorely needed in the industry. Too many people are getting away with too much these days: people call themselves “senior developers” but don’t have a clue when it comes to writing well-understood, valuable code, and too many people entering the industry are being trained by institutions whose only interest is monetary gain and who couldn’t care less about producing good software craftsmen. The revolution is upon us. Fight the flood of ineptness and lack of pride that seems to have taken over our industry: sign the manifesto, then live it!


Found an interesting article on generating test data entitled Testometer – Triangle Test.

Here’s a blurb from the site to entice you to go there and give it a whirl. It’s quite good; I managed to score 7 out of 10:

“This is probably one of the most common question in software testing interview. This problem was first introduced by Myers, who was one of the first person to treat Software Testing as a different subject all together. This test check your ability to think about generating test data in a given condition”
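The triangle problem the quiz refers to is Myers’ classic exercise: given three side lengths, classify them as equilateral, isosceles, scalene, or not a triangle at all. A minimal sketch of both the classifier and the kind of boundary-probing test data the quiz rewards (the function name and the sample cases are my own illustration, not taken from the quiz):

```python
# Myers' triangle problem: classify three side lengths.
# Good test data probes the boundaries (zero, negative, degenerate
# triangles), not just the happy path.

def classify_triangle(a, b, c):
    """Classify side lengths a, b, c."""
    sides = sorted([a, b, c])
    # Reject non-positive sides and violations of the triangle inequality.
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

cases = [
    (3, 3, 3),   # equilateral
    (3, 3, 5),   # isosceles
    (3, 4, 5),   # scalene
    (1, 2, 3),   # degenerate: a + b == c, so no area
    (0, 4, 5),   # zero-length side
    (-3, 4, 5),  # negative side
]
for case in cases:
    print(case, "->", classify_triangle(*case))
```

Scoring well on the quiz is mostly about remembering the degenerate and invalid cases in the lower half of that list.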

An (agile) environment…

Scrum often talks about “ideal development environments” (Mike Cohn has a nice blog post on this entitled The Ideal Agile Workspace, well worth the read), but this usually excludes anything related to engineering (hardware, software, etc.). The reason is obvious: Scrum does not profess to offer guidance on engineering practices (though it clearly states they are important), but rather on “management” practices.

Anyway, to the point of this post. I have a challenge at my current job: designing a development environment (hardware/software, not the physical workspace) geared for agile development. I had some high-level ideas:

  • Somewhere to play – I figure it’s important for team members to have an area to “play” in: experiment with ideas, test theories, and do proofs of concept without impacting other teams, especially in environments where multiple teams are building toward a common project.
  • Somewhere to build – teams need somewhere to actually construct things. Typically this will be their own machines, together with a (decent) source repository to share and version code. This probably overlaps with their “play” environment.
  • Somewhere to share – collaborating teams are good teams (if not great teams). Encourage teams to collaborate not only by co-locating members but by giving them access to tools for sharing information (wikis, blogs, etc.).
  • Somewhere to test – generally I believe there are four “kinds” of testing, integrated into the sprint at various points: developer testing (does my stuff work, and have I broken anything?), integration testing (does everything work nicely together?), quality assurance testing (does everything perform at a predefined “quality” level? This should include, but not be limited to, security, performance, durability, etc.) and user acceptance testing (does the product meet the requirements?). Each level of testing adds an additional feeling of “warm and fuzzy”, and as such I believe each of these environments should be clearly separated, demarcated, and controlled.
  • Automation – everything repetitive should be automated, plain and (maybe not so) simple. There is, however, a small addendum to this, in my opinion: the degree of risk determines the degree of automation; the higher the risk, the less the automation. Take, for example, releasing a product into a production environment: a simple enough task to automate, but can you risk it going pear-shaped? I don’t know; am I wrong here? Perhaps this is just a trust/quality thing, and if you have enough quality (and confidence) checks in place, releasing to a production environment is a trivial task that CAN be automated?
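One way to reconcile “automate everything” with “risk limits automation” is to script every step but keep a human approval on the highest-risk ones. A minimal sketch (step names and the confirm callback are illustrative, not a real deployment tool):

```python
# "Degree of risk determines degree of automation": every step is
# scripted, but a gated (high-risk) step only runs if confirm(name)
# approves it; otherwise the release halts there.

def release(steps, confirm=lambda name: True):
    """Run (name, gated) steps in order; return the steps completed."""
    completed = []
    for name, gated in steps:
        if gated and not confirm(name):
            print(f"Halted before high-risk step: {name}")
            break
        print(f"Running: {name}")
        completed.append(name)
    return completed

steps = [
    ("build artifacts", False),
    ("deploy to QA", False),
    ("run smoke tests", False),
    ("deploy to production", True),  # highest risk: keep a human in the loop
]

release(steps)                              # fully automated run
release(steps, confirm=lambda name: False)  # human declines the gated step
```

As confidence in the quality checks grows, the gated steps can simply be flipped to ungated, so full automation is a configuration change rather than a rewrite.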

In line with the above, this is what I think would make a good environment:

  • Every team member has their own environment (machine) that is capable of running as much as possible of the required frameworks/tools/databases, etc.
  • Where something cannot reasonably run on a team member’s machine (think large databases), establish a common “development” environment where a reasonable copy of the system can run. Set rules for working on this common environment (typically a team can sort this out amongst themselves). The team has full control of this environment. Possibly virtualize it so that it can be backed up/restored/recreated/duplicated easily.
  • Create a shared environment where team members can collaborate on ideas; a wiki and a blog are good ideas, hell, even a Twitter-like setup might be interesting. Of course, this should not be used by management to track progress!
  • Create a source control environment with a high degree of stability (backups, etc.) and control. Define solid rules for using this system; retroactively “fixing” a source control system is a nightmare. Some good ideas for versioning/branching can be found here (specifically for CVS, but the concepts can be applied to any decent version control system).
  • Create an integration environment that is a closer match to the production environment, and automate the building of systems to this machine on a continuous basis, giving feedback to team members (via emails, web pages, etc.). This is the first “gate of quality” for the work being produced. This environment should be automated as much as possible (if not completely); again, consider virtualizing it for the same reasons as the development environment. Consider this a “sanity check” environment to make sure everyone’s contributions “play nicely” together. This environment can run all the testing frameworks (although consider also running them on the development environment?):
    • Unit testing (and test coverage testing)
    • Code style checking etc.
  • Create a quality assurance environment that near-perfectly matches your production environment. Work is automatically “promoted” to this environment once it has successfully passed all the integration quality requirements. Consider doing functional and non-functional tests (performance, security, etc.) here (the integration environment is also a good candidate for this, but the quality assurance environment should be a closer “replica” of production, so it will more than likely behave similarly to it). This is where team testers will “see” the results of the team’s work; it should be treated like a production environment and accorded the same level of “respect”. Members of the team doing the actual development must consider this environment “production”. Remember also to “automate what you repeat” at this point: tests run on the system should be automated as much as possible. This is the second “gate of quality” for the work being produced.
  • Create a user acceptance environment where the business will ultimately see and comment on work done. This is the final “gate of quality” for the work being produced. This environment should be treated EXACTLY like the production environment, and code should only be “promoted” to this system, not built on it. After the business accepts here, the code is “release ready”. Consider automating the process of releasing code to this environment if possible; however, the types and degree of testing that need to be run on QA in order to promote to this environment may prevent it.
  • Finally, of course, production environment… but that’s a whole different kettle of fish and I’ll leave that up to more qualified people to build 🙂
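The integration environment’s “first gate of quality” can be boiled down to a simple pass/fail decision: did all the unit tests pass, and is coverage above some agreed threshold? A minimal sketch (the thresholds, names, and report shape are my own assumptions, not tied to any specific CI tool):

```python
# First "gate of quality": fail the build if any unit test failed or
# if coverage drops below the agreed minimum, and say why.

def quality_gate(tests_passed, tests_total, coverage, min_coverage=0.80):
    """Return (ok, reasons) for a build; ok is True only if the gate passes."""
    reasons = []
    if tests_passed < tests_total:
        reasons.append(f"{tests_total - tests_passed} test(s) failed")
    if coverage < min_coverage:
        reasons.append(f"coverage {coverage:.0%} is below {min_coverage:.0%}")
    return (not reasons, reasons)

print(quality_gate(120, 120, 0.85))  # (True, [])
print(quality_gate(118, 120, 0.85))  # fails: 2 tests failed
print(quality_gate(120, 120, 0.60))  # fails: coverage too low
```

The same shape extends naturally to the style checks mentioned above: each check contributes its reasons, and the build is promoted only when the reasons list is empty.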
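The promotion idea running through the environments above (“promoted, not built”) can also be sketched: the same build artifact advances from integration to QA to UAT one gate at a time, and stops at the first gate it fails. The environment names and result mapping are illustrative:

```python
# "Promote, don't rebuild": a build advances through the environments
# in order and halts at the first quality gate it has not passed.

ENVIRONMENTS = ["integration", "qa", "uat"]

def promote(gate_results):
    """gate_results maps environment -> bool (did its gate pass?).
    Return the list of environments the build reached, in order."""
    reached = []
    for env in ENVIRONMENTS:
        if not gate_results.get(env, False):
            break
        reached.append(env)
    return reached

# A build that passed integration and QA but not yet UAT:
print(promote({"integration": True, "qa": True, "uat": False}))
```

Because promotion is strictly ordered, nothing can appear in UAT that has not already survived both earlier gates, which is exactly the “warm and fuzzy” layering described above.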

OK, so those are my (perhaps random) thoughts on the subject. Have I left stuff out? Sure, I probably have. Will this work in practice? Not sure; I think it should, and I’m going to give it a try. Is it “agile enough”? Who cares; it can evolve and change. I think it can work, though: with enough automation and enough discipline it should be a decent solution.

I welcome any (constructive) opinions.