I think the question of what the best approach to developing software is, is nearly as old as software development itself. And it cannot be answered in general, because in my opinion it depends entirely on the environment: whether you are working on projects in an agency or on products in an enterprise company, which language and tools you are using, how mature you and your team are, …
For the last few years, test-driven development has seemed to be the rising star of the IT industry, especially when you want good quality. But to me this is curious, because when I look at the software on the market there appear to be two kinds: software that is well tested, and software that is successful from a business point of view. Okay, that might be very black-and-white thinking, but to be honest I have never seen a successful enterprise software product with 100% test coverage from automated tests. Of course, everyone in the industry with a technical background strives for TDD. I would rather say that most of the software I have seen is mainly tested manually. Imho the reason for this is that I have mostly seen legacy software that has been on the market for three or more years.
I think that if you start something new, a TDD approach gives you plenty of advantages, e.g. you can implement software against a specification. In my opinion this comes in especially handy for ETL or REST services. But I do not think there is really a good KPI for what good software quality means; metrics like code coverage in particular are the wrong approach. Additionally, testing is expensive: from my point of view, if you really go test-driven and write tests on all levels of the test pyramid, it is easy to spend 60% of the total effort on testing. Yes, this pays back. But it takes time, and it might not pay back if, for example, you run an experiment and decide to remove a feature because it is not successful. So I think a good team knows best what to test and what not, whether TDD makes sense for them, and especially when to do it: in the proof of concept or later.
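To make the spec-first idea concrete, here is a minimal sketch in Python. The tiny ETL rule and the function name `transform` are invented for illustration; the point is only that the test encodes the specification and can exist before any implementation does.

```python
# Hypothetical spec for a small ETL step:
# "Input records have 'name' and 'age'; output keeps only adults
#  (age >= 18) and upper-cases the name."

def transform(records):
    # Implementation written to satisfy the spec encoded in the test below.
    return [
        {"name": r["name"].upper(), "age": r["age"]}
        for r in records
        if r["age"] >= 18
    ]

def test_transform_spec():
    # This test is a direct translation of the spec and would be
    # written first, before transform() exists.
    data = [{"name": "ada", "age": 36}, {"name": "tim", "age": 12}]
    assert transform(data) == [{"name": "ADA", "age": 36}]

test_transform_spec()
```

In a real project the same shape works one level up: the "spec" is the REST contract or ETL mapping document, and the first commit is the failing test derived from it.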
But for me the main question is how to get to a certain level of tests and how to convince people of their advantages. And if you follow my approach of running an experiment with not-so-well-tested code and later decide that the feature stays live, the question becomes: how do you improve the quality and the amount of tests?
I see only one way to approach that, and it might also be a good way to improve legacy software: TDBF – test-driven bug fixing. That means as soon as there is a bug, the first task is not to fix it but to reproduce it in an automated way, ideally on the developer's machine. I think the advantages are clear: the developer can tell early in the development process whether a fix actually works, root cause analysis becomes much easier, and the bug will not appear a second time if that test is run automatically, e.g. after each merge. It sounds easy, but different environments often behave completely differently, and writing good tests is a hard task. Still, for me this is the thing to do to keep software maintainable and to improve the skills of your engineers. And of course it is much more pragmatic than just declaring "Now we do TDD". Let's hope that useful, pragmatic TDD evolves out of TDBF.
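The TDBF workflow above can be sketched in a few lines. The bug report, the function `apply_discount`, and the numbers are all made up for illustration; what matters is the order of work: first a test that reproduces the bug, then the fix, then the test stays in the suite as a regression guard.

```python
# Hypothetical bug report: "the 10% discount is applied twice
# for orders over 100".

def apply_discount(total: float) -> float:
    # Fixed implementation: the 10% discount is applied exactly once.
    # The buggy version multiplied by 0.9 twice (120.0 -> 97.2).
    if total > 100:
        return round(total * 0.9, 2)
    return total

def test_bug_discount_applied_once():
    # Written FIRST, straight from the bug report. It failed against
    # the buggy code and now runs automatically, e.g. after each merge,
    # so the bug cannot silently come back.
    assert apply_discount(120.0) == 108.0   # discounted exactly once
    assert apply_discount(80.0) == 80.0     # no discount below threshold

test_bug_discount_applied_once()
```

The regression test doubles as documentation of the bug: anyone reading the suite later sees which behaviour was once broken and why it is pinned down.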
Please feel free to share your thoughts and have a nice week!