
A Cultural Divide – Why The Hell Are We So Stubborn?

4.8.2014 | 10 minutes of reading time

“The only thing that is constant is change.”
– Heraclitus

Bonfire of the Vanities

Over the last few months, there have been quite a few clamorous controversies in the global programming community, driven by diametrically opposing views on fundamental principles, often becoming very emotional, even aggressive, over time. Here are a couple:

  • whether it is better to stick to the principles of a framework (such as Rails) or to decouple from it through TDD
  • whether heavily marketed Agile project management tools and certificates still reflect “real Agile” ideals and craftsmanship values
In all these cases, both sides of the discussion accuse the other of being wrong, having no tolerance for different opinions, causing harm to the community, etc. etc. Both have the best intentions, obviously, and both are eager to point out that it’s all about cost and benefits.

Having had similar discussions – big and small – on my own projects, I find it increasingly hard to talk about issues that involve what I consider good software craftsmanship without running into situations where we either

  • agree to disagree, and one side or the other grudgingly accepts what the majority decides
  • don’t make important decisions at all, because we’re stuck repeating the same arguments
  • end up each stubbornly following our own way – which to me is the worst kind of outcome a discussion like this can have.

Why is that? Have we always been this hard-headed? And if not, how come we can’t agree on one set of values to guide us through our daily work? How is our individual perception of software development so fundamentally different that we can’t find a common denominator?

Let’s start by looking at the root of the problem:

99 little bugs in the code

Anyone who has ever written a program knows that there is one factor in software development that is entirely unpredictable and can have catastrophic consequences: where and when errors occur, and how long it will take to fix them.
It is, of course, the nature of errors to happen under the most inconvenient of circumstances, and unfortunately, more experienced programmers don’t necessarily make fewer mistakes – their mistakes just become harder and harder to find (because they tend to be hidden within more complex programs), and they usually cause a lot more damage.

This is what I believe all of us can agree on: errors take an unpredictable amount of time to find and fix. Fixing, or even rewriting, programs because of an error is always costly. And it’s a risk that is nearly impossible to calculate.

How To Prevent Errors From Being Made?

Unsurprisingly, the significance of errors quickly became obvious even in the earliest days of the computer industry, when programmers were literally writing software as sequences of ones and zeros. Consequently, people tried to find processes and techniques to safeguard against errors – to make programming more intuitive, to prevent errors from being made in the first place, and to make successfully working programs reusable, so that the same problems didn’t have to be solved a second time. Probably the first major proof of this is Grace Hopper’s invention of the A-0 compiler in 1952: it enabled programmers to write programs in a modular way, allowed working subroutines to be reused, and refused to execute programs if mistakes were encountered during compilation.

This approach helped create larger and more complex programs, written no longer by a single programmer but by teams, whose work products had to interact. And so, inevitably, it was discovered that while programming itself had become more effective, a whole new set of problems – human problems – had to be solved. Lacking any previous experience with software development, the logical first choice of the time was to look at the working management models used in other industries and simply adapt their methods. Evidence of efforts to adapt the so-called “Waterfall Model”, which was mostly used in construction, mechanical production, and engineering, can be traced back as far as 1956. It prescribed a strict sequence of steps, each executed to perfection and subsequently tested for quality, in order to create a software product:

  1. Requirements analysis
  2. Software design
  3. Implementation
  4. Testing
  5. Deployment
  6. Maintenance

These two ideas – using well-defined languages, rules, and restrictions during the build process to reduce errors, and using similarly restrictive process models to prevent human failures – constitute, in a nutshell, the beginning of a very long and still ongoing search for “the best way” to create software.

The Traditional Way: Exert Control

Of course, the “restrictive” approach to software development has spawned a large number of descendants over the years: many variations of “Waterfall” have been tried (e.g., the V-Model), many different approaches to each of the six steps have been tested, and we have certainly come a long way since then. But overall, the common perception of software development is still much the same: it is considered an engineering discipline. And so the uncertainties of the development process are countered with measures that combine meticulous planning, strict quality assurance, and the utmost amount of control.

The same is true for the evolution of restrictive technologies: the invention of object-oriented programming and encapsulation put limits on the use of both memory and functionality, while static typing restricted the users of an object from using it in unintended ways. This led to the creation of frameworks and libraries, which likewise imposed opinions and assumptions about how programs could be written on top of them. Increased complexity was countered with ever more sophisticated editors, tools, and IDEs.
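To make that kind of restriction tangible, here is a minimal sketch in TypeScript (the language and all names are merely illustrative, not what the tools of that era looked like): encapsulation hides internal state, and static typing lets the compiler reject unintended use before the program ever runs.

```typescript
// Encapsulation: the balance is private, so callers cannot put the
// account into an invalid state directly.
class Account {
  private balance = 0;

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("deposit must be positive");
    this.balance += amount;
  }

  getBalance(): number {
    return this.balance;
  }
}

const account = new Account();
account.deposit(100);

// Both of the following are rejected at compile time – the "restrictive"
// philosophy at work:
// account.balance = -50;   // error: 'balance' is private
// account.deposit("lots"); // error: 'string' is not assignable to 'number'
```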

The ultimate embodiment of this philosophy can be found in model-driven software development, which keeps user input at a very high level of abstraction and generates large parts of the actual executable code from predefined templates. It takes away many of the expressive choices an individual programmer could make in favor of a direct representation of the domain logic in the model, and thus imposes a rather strict top-down rule of how a program should best be written.
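As a toy illustration of that idea (all names are invented, and real model-driven toolchains are far more elaborate), the “model” below is a purely declarative description of an entity, and the executable code is generated from a predefined template rather than written by hand:

```typescript
// A toy "model": a declarative, high-level description of an entity.
interface EntityModel {
  name: string;
  fields: { name: string; type: "string" | "number" }[];
}

const customer: EntityModel = {
  name: "Customer",
  fields: [
    { name: "id", type: "number" },
    { name: "email", type: "string" },
  ],
};

// A toy "generator": the programmer edits the model, never the output.
function generateClass(model: EntityModel): string {
  const fields = model.fields
    .map((f) => `  ${f.name}: ${f.type};`)
    .join("\n");
  return `class ${model.name} {\n${fields}\n}`;
}

console.log(generateClass(customer));
// class Customer {
//   id: number;
//   email: string;
// }
```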

The Agile Way: Empowering Individuals

Incidentally, just a short while after the advent of the “Waterfall” process, a different kind of strategy emerged. Of course, strict planning and execution efforts were effective: the average number of defects decreased and the quality of the software improved, which in turn increased productivity and helped decrease cost. But as more and more programs were written and put to practical use, a different dilemma had to be solved:
Systems that are built to a detailed specification are very rigid in nature; they are manufactured to fit a very precise set of requirements, and once put in place, they are “done”. Some such programs, however, quickly lose their usefulness as the environment in which they operate evolves. A “rigid” program that calculates taxes, for example, would need to be replaced every time the tax code is even slightly modified. The old code no longer generates value, and rewriting the entire system is a costly undertaking. In order to adapt to new circumstances, such programs must accommodate change whenever the underlying requirements change.
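To make the tax example concrete, here is a deliberately simplified sketch (the rates and all names are invented for illustration): in the rigid version the rule is baked into the program, while the adaptive version survives a change of the tax code without a rewrite.

```typescript
// Rigid: the tax rule is hard-coded. Any change to the tax code means
// changing, retesting, and redeploying this function.
function calculateTaxRigid(income: number): number {
  return income * 0.19; // today's rate, baked in
}

// Accommodating change: the rule is supplied from the outside, so a new
// tax code only requires providing a new rule.
type TaxRule = (income: number) => number;

function calculateTax(income: number, rule: TaxRule): number {
  return rule(income);
}

const taxCode2014: TaxRule = (income) => income * 0.19;
const taxCode2015: TaxRule = (income) => income * 0.2; // a hypothetical revision

console.log(calculateTax(50000, taxCode2014)); // 9500
console.log(calculateTax(50000, taxCode2015)); // 10000
```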

Change, however, had never been part of the plan. On the contrary: traditional methods still try to eliminate change by using prolonged planning periods with many, many revisions, making sure every slight detail is considered before the actual programming begins.

In the early days, a few projects recognized change as a factor that could not be ignored. To be able to react more quickly, they tried to move from a long-running, linear development model to a shorter, incremental approach. This was attempted as early as 1957, at IBM. It was fairly successful, and though it didn’t have a name back then, the idea prevailed. Finally, following a small number of experiments in the 1970s, the 1990s brought a sheer explosion of progressive software production methods, such as

  • Rapid Application Development (RAD)
  • Scrum
  • Extreme Programming (XP)
  • Feature-Driven Development

and many more.

All of them had in common that they moved away from the heavy, traditional, restrictive methods and towards a lightweight, adaptive workflow that trusted individuals and teams to do the right thing. This culminated in the release of the Agile Manifesto in 2001:

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.

Obviously, Agile workflows and management processes did not go too well with the traditional (restrictive) tool set. The new movement preferred dynamic languages and duck typing over static type checking and extensive declarations, conventions and patterns over extensive configuration files, test-driven development over single-purpose APIs, and collaborative processes over “rock star” individualists – and the focus shifted dramatically from putting effort into the creation of powerful and heavily regulated frameworks towards knowledge transfer and the empowerment of developers. Consequently, the Software Craftsmanship movement was founded in 2009. It committed itself to a set of values, principles, and professional behaviors intended to create a common ground for teaching and self-improvement, and a new kind of trust between customers and developers: a trust in skills and professionalism, rather than rules and contracts.
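To give one concrete taste of that shift, here is a small sketch of the duck typing mentioned above (all names are invented; TypeScript’s structural typing is, roughly speaking, duck typing checked at compile time): any object with the right “shape” is accepted, with no declared inheritance or registration required.

```typescript
// Duck typing: whatever has the right shape "quacks" and is accepted.
interface Logger {
  log(message: string): void;
}

function runJob(logger: Logger): void {
  logger.log("job started");
  // ... the actual work would happen here ...
  logger.log("job finished");
}

// This object never declares "implements Logger" – having a matching
// log() method is enough.
const consoleLogger = {
  log: (message: string) => console.log(`[job] ${message}`),
};

runJob(consoleLogger);
```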

The Crux of the Biscuit is the Apostrophe

We have now briefly encountered two very different views of what the best way to produce software is. I believe that in the differences between these two philosophies also lies the root of our conflicts – certainly of the ones I mentioned at the beginning of this article. Let’s put them side by side once more:

Traditional Approach               | Agile Approach
-----------------------------------|----------------------------------------
Predictive                         | Adaptive
Restrictive                        | Permissive
Linear                             | Iterative
Heavily regulated                  | Self-organized
Tool-driven                        | Skill-driven
Prevent failure                    | Fail early and often
Plan everything                    | Defer decisions until necessary
Focus on meeting specifications    | Focus on creating value
Prevent change                     | Embrace change
Write documentation for everything | Write documentation only when necessary
I own my code                      | The team owns the code

Considering how drastically different these goals and values are – how could we not get into a fight when we argue over whether it is better to stick to the principles of a framework (Rails) or to decouple from it (through TDD)?
How could we not mourn the absence of “real Agile” ideals and craftsmanship values in heavily marketed Agile project management tools and certificates?
And from the other point of view, how can we stand being told we’re suddenly all wrong and need to change, when we’ve always known how to write software in the same safe and predictable way, and this new approach negates our skills and takes away all control and certainty?

Depending on which point of view you take, it is indeed very hard not to feel either held back or pushed too far. And I’m sorry to say I don’t have a solution for this dilemma, either. I have been down both roads, and I personally have come to embrace the promise of Agile and Software Craftsmanship: it fits my preferences, allows me to learn, improve, and succeed at the same time, and in my opinion it is much better suited to the way software development works in general.

And yet it wouldn’t be right to say it is the “only way”, or make it an absolute. Frank Zappa’s beautiful aphorism sums it up nicely: The meaning of things is always in the context.

I can certainly think of a number of circumstances in which I would consider the traditional approach prudent and useful: when programming medical equipment, public transit systems, communication infrastructure, military hardware, … – in short, any time there is a very well-known, specific, and precise set of requirements, absolutely no room for error, and little or no expected change. That’s when you use the hell out of “Waterfall”.
In all other cases – and I believe those to be the majority by a huge margin – I would most definitely prefer the other way. I also think we hardly ever encounter projects “in real life” where we’re able to go 100% either way; more often than not, we’ll have to compromise at some point. A tendency one way or the other, however, should usually be perceivable.

How do we manage to get along better, now that we know why we think so differently? Let’s first learn to respect what each of us brings to the table: there is immense value in both. Other than that, I have no idea – I would love to hear your suggestions, so please feel free to comment.
