Saturday, March 21, 2015

Why TDD is good for you


I must confess: I've realized that TDD is a must-have in a project. Even though I don't like TDD, right now there's no better way of developing software. Writing software is like solving a perverted traveling salesman problem where the destination keeps changing, and somehow the paths you took start to make less sense as the software progresses. To know whether you ended up with the best solution, you'd have to build all the different versions of the solution and compare them, and that's not a viable idea.

Given that you have developers with different backgrounds, you need a shared discipline that lets everyone trust that everyone else is producing code with the same structure and conventions. And you have to keep applying it throughout the project; if you stray from the methodology, you'll end up with inconsistencies in your code. And really, in a project you don't want to spend much time (or any) discussing technical details related to pure code.

What I don't like about TDD

One important aspect is that I think TDD gets credit for things which TDD cannot solve. One claim is that it somehow creates layers, and I think this is deduced from the fact that TDD helps with creating better abstractions. That is a really nice feature, but it's a side effect of practicing TDD, not the practice itself. And it's not really TDD's fault that it cannot build layers, because abstractions cannot be totally abstract: if they could, that would mean you could send in any data (or none) and get back exactly the data you want, which is impossible. There's only one thing that could pull that off, and I doubt it actually exists, although there are many believers out there.

Also, TDD does different things for different languages, so you get more from TDD in certain languages while others benefit less from it (one could argue that the languages which need TDD practices the most are bad languages).

TDD will influence your code and therefore your solution, and this will inevitably lead to “test induced design damage”. This means, much as Conway's law states, that your code will be tainted by TDD: code written with TDD in mind will be easier to integrate into a TDD project, and code which is NOT written according to TDD will be hard to fit into one (and no, it's not about a framework's ability to be integrated). It also means that introducing TDD into a project which did not start as a TDD project will be very hard. Most of the time, TDD projects are not consistent about this, and that will hurt you in the end.

Also, I think TDD shows that your language is failing to describe what you really want, so you need to rely on something external to verify that you have written something which is correct according to your understanding. Instead of having TDD as some sort of “document”, I'd rather have all those assertions expressed by the code itself. I usually consider large tests a code smell, since they give away that the code either does too many things, doesn't express enough intention, or isn't powerful enough.
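As a rough sketch of what I mean, consider pushing an assertion into the code itself instead of into a test suite. The Email type below is hypothetical and not taken from any library:

public final class Email {
    private final String value;

    public Email(String value) {
        // The assertion lives in the code: no Email instance can exist
        // without passing this check, so no external test has to restate it.
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("Not a valid email: " + value);
        }
        this.value = value;
    }

    public String value() {
        return value;
    }
}

Every method receiving an Email can now rely on that invariant, instead of each caller re-verifying it in a separate test case.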

But most importantly, TDD creates a focus on tests and unit tests, and TDD is not really about those. It's about dealing with information and always conforming to that information; the test case merely verifies this, a sort of implementation of the TDD abstraction, and ideally we should be able to get rid of it.

If one considers that TDD is language and tech agnostic, meaning we need TDD as a framework to actually deliver working code, then the amount of work needed to verify your code should say something about the chosen language. If a language requires a lot of test cases to verify that you did what you intended to do, that language is a poor choice. I'm not going to point at specific languages here; I leave that for some future discussion.

I really hope that one day we can get rid of TDD, but as for now, there is simply no better way to write software.

Saturday, January 3, 2015

Good separation of concerns

Separation of concerns is really important when writing software. It's tightly coupled with working and correct code, even if that's not obvious at first glance. One of my personal views is that this is why untyped languages are not a good choice for anything long-term: types are crucial for good separation of concerns. I've worked with typed languages in large projects, and they showed that even with types it's hard to keep things separated. Achieving good separation in an untyped language is not impossible, but most of the time it just breaks down. Just the fact that you need TDD to "verify" your code is a clear indicator of this.

An example of erroneous mixing of concerns:

var print_info = function(x){
    console.log('Variable x is of type "' + typeof(x) + '" and has the value of "' + x + '"');
};

var x = "123"; // Variable x is of type string with value "123"

print_info(x);

x -= 0; // Variable x now has type number with value 123

print_info(x);
Output is:
Variable x is of type "string" and has the value of "123"
Variable x is of type "number" and has the value of "123"

The above example is really simple, but it illustrates an important aspect of mixing concerns; most of the time these things are more subtle and less obvious.
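For contrast, here is a minimal Java counterpart (my own sketch, not part of the original example) where the compiler stops the concern of x's type from silently changing:

public class PrintInfo {
    static void print_info(String x) {
        System.out.println("Variable x is of type String with value \"" + x + "\"");
    }

    public static void main(String[] args) {
        String x = "123";
        print_info(x);

        // x -= 0;          // does not compile: bad operand types for binary operator '-'
        // print_info(42);  // does not compile: int cannot be converted to String
    }
}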

There are several pitfalls when designing code, and it's hard to know whether you are making good decisions when building software. One good rule is the "Gun rule", which is quite simple:

A modern gun has very good separation of concerns (although it is a very despicable piece of technology). Most notably, the bullet is an example of excellent separation of concerns. You can manufacture bullets separately and still deliver functionality; there is even room for making modifications to the bullets without needing to change the gun. Obviously, there are certain factors you can't change without changing the gun, such as the size.

Another factor is that a gun is useless without a bullet, and a bullet is equally useless without a gun, so in functionality they are tightly coupled. For the gun to work, you need to deploy the bullet with the gun, and this is a good indicator of how they should be deployed: together. If you need different release cycles for them, you can separate them into two deployment artifacts, but they should still share resources. The Java VM really facilitates this with dynamic class loading (a really good feature, but for some reason not very well understood); other technologies might have problems with this and might require a full restart.
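As a minimal sketch of that idea (the Bullet interface and the class name here are hypothetical), the gun can pick up a freshly deployed bullet implementation from the classpath without being restarted:

interface Bullet {
    void fire();
}

class Gun {
    // Load a bullet implementation by name at runtime; redeploying the
    // bullet artifact doesn't require changing or restarting the gun.
    static Bullet loadBullet(String className) throws Exception {
        return (Bullet) Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
    }
}

Calling Gun.loadBullet("org.example.TracerBullet") would then pick up whatever implementation is currently deployed; the class name is of course made up here.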

If you now equip the gun with a scope or perhaps a laser pointer, that surely makes the gun better, but it is not necessary for the operation of the gun. The gun will work with and without those additions, and they are good separations of concerns in their own right. These are candidates for being deployed on their own.

One misconception is that just because you need a different release cycle, or have identified a module with good separation of concerns, you must deploy it on a separate instance. Using the gun as an example: holding the gun in one hand and the bullet in the other doesn't make it more useful or more modular, even though it may seem like a good idea and adheres to certain architectural fashions. Taken to the extreme, this idea means you should deploy each class in a separate runtime, which clearly doesn't make anything more modular or better.

You should look for the things which can be removed while the rest still maintains functionality. In fact, being able to remove whole blocks of functionality without impacting the remaining function is a good indicator of good separation of concerns (the same goes for adding them). If you have to tear something apart, that's an indicator it's not separated enough.

Another thing which is often overlooked with separation of concerns is that too much effort is spent on making abstractions. Too much abstraction actually harms your separation of concerns: everything becomes so abstract that you have no idea what really happens.

As an example: instead of using a specific object to "tunnel" data through layers, you decide to use a map like this:
import java.util.Map;

interface SomeInterface {
    // Any string-to-string data can pass through here, related or not.
    void someMethod(Map<String, String> map);
}

This is convenient because you can now cut through anything, just because Map and String both live in a library which happens to be global. You can also bunch things together which, modularly, shouldn't be together, and more importantly, there's nothing stopping you from adding more things which make no sense at all. Fortunately, in Java one could do this instead:
interface SomeInterface {
    // Now anything at all can pass through here.
    void someMethod(Object map);
}
You then cast it to the Map whenever you need that information, but that kind of defeats a lot of things. Not only do you lose the typing, you also lose the intention and the function of the data. And when you lose that information, you also lose separation of concerns, because now you don't know where your separations start and where they end.
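For contrast, here is a sketch of the "specific object" alternative mentioned above (the names are made up for illustration):

final class CustomerInfo {
    final String name;
    final String accountId;

    CustomerInfo(String name, String accountId) {
        this.name = name;
        this.accountId = accountId;
    }
}

interface SomeInterface {
    // The type states exactly what crosses this boundary, so the
    // separation has a visible start and end.
    void someMethod(CustomerInfo info);
}

Nothing unrelated can sneak in, and the intention of the data survives the layer boundary.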