Dreaming In Code Pdf

Nakita Heitmann

Aug 4, 2024, 5:49:14 PM

Is it feasible to expect 100% code coverage in heavy jQuery/Backbone.js web applications? Is it reasonable to fail a sprint because 100% coverage was not met, when actual code coverage hovers around 92%–95% in JavaScript/jQuery?


Realistic

If you have automated testing that has been shown to cover the entire code base, then insisting upon 100% coverage is reasonable.

It also depends upon how critical the project is. The more critical it is, the more reasonable it becomes to expect, or even demand, complete code coverage.

It's easier to do this for smaller to medium sized projects.
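If you do decide to enforce a coverage gate, most coverage tools can fail the build automatically when a threshold is missed. As a sketch (this assumes the project uses nyc, the Istanbul command-line client; the thresholds are placeholders):

```javascript
// nyc.config.js -- hypothetical coverage gate for an nyc-based project
module.exports = {
  "check-coverage": true, // fail the run when any threshold is missed
  lines: 100,
  branches: 100,
  functions: 100,
  statements: 100
};
```

Dropping the numbers to, say, 95 turns the same config into a "hold the line" gate rather than an absolute one.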


Unrealistic

You're starting at 0% coverage ...

The project is monstrous with many, many error paths that are difficult to recreate or trigger.

Management is unwilling to commit / invest to make sure the coverage is there.


I've worked the gamut of projects ranging from no coverage to decent. Never a project with 100%, but there were certainly times I wished we had closer to 100% coverage.

Ultimately the question is if the existing coverage meets enough of the required cases for the team to be comfortable in shipping the product.


We don't know the impact of a failure on your project, so we can't say whether 92% or 95% is enough, whether 100% is really required, or, for that matter, whether that 100% actually tests everything you expect it to.


Tests are expensive to write: they are code that has to be written and tested itself, code that has to be documented so it is clear what each test is actually trying to verify, and code that has to be maintained as business logic changes and tests fail because they are out of date. Maintaining automated tests and their documentation can sometimes be more expensive than maintaining the code itself.


This is not to say that unit tests and integration tests aren't useful, but only where they make sense. Outside of industries where failures can kill people, it doesn't make sense to try to test every line of code in a code base; for those less critical code bases, it is impossible to show a positive return on the investment that 100% code coverage would entail.


In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running or continue to run forever.


Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem proven to be undecidable.
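To connect this to coverage: even deciding whether a given line is reachable at all can reduce to the halting problem. A contrived JavaScript sketch (the Collatz-style loop here is purely illustrative; nobody has proven it terminates for every positive integer):

```javascript
// Whether the "unreached" branch in guarded() is dead code depends on
// whether mystery() can ever fail to return true -- which a coverage
// tool cannot decide for arbitrary code.
function mystery(n) {
  // Collatz iteration: termination for all n is an open problem.
  while (n !== 1) {
    n = n % 2 === 0 ? n / 2 : 3 * n + 1;
  }
  return true;
}

function guarded(n) {
  if (mystery(n)) {
    return "reached";
  }
  return "unreached"; // reachable? undecidable in general
}
```

So a tool can only report which lines *were* executed; it cannot, in general, tell you which uncovered lines are even coverable.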


Basically, the difficult-to-test parts have been shunted into areas where they don't necessarily count as "code". It's not always realistic, but note that, independent of helping you test, all of these practices make your codebase easier to work on.


100% unit-test coverage for every piece of a particular application is a pipe dream, even with new projects. I wish it were the case, but sometimes you just cannot cover a piece of code, no matter how hard you try to abstract away external dependencies. For example, say your code has to invoke a web service. You can hide the web service calls behind an interface so you can mock that piece and test the business logic before and after the call, but the actual piece that invokes the web service cannot be unit tested (not very well, anyway). Another example is code that needs to connect to a TCP server. You can hide the connection logic behind an interface, but the code that physically connects to the TCP server cannot be unit tested, because if the server is down for any reason the unit test will fail. And unit tests should always pass, no matter when they are invoked.
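The interface-and-mock pattern described above might look like this in plain JavaScript (the service and function names are invented for illustration):

```javascript
// Business logic depends only on the priceService interface, so a unit
// test can inject a fake instead of hitting the network.
function makePriceFormatter(priceService) {
  return {
    formattedPrice: function (sku) {
      var cents = priceService.getPriceCents(sku);
      return "$" + (cents / 100).toFixed(2);
    }
  };
}

// Production implementation: the thin, hard-to-unit-test piece.
var httpPriceService = {
  getPriceCents: function (sku) {
    // a $.ajax({ url: "/prices/" + sku, ... }) call would live here
    throw new Error("network call, not exercised in unit tests");
  }
};

// In a unit test, swap in a fake:
var fakeService = { getPriceCents: function () { return 1999; } };
var formatter = makePriceFormatter(fakeService);
// formatter.formattedPrice("ABC") === "$19.99"
```

Everything except the body of `httpPriceService.getPriceCents` is now coverable; the uncoverable remainder is one thin wrapper.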


A good rule of thumb is that all of your business logic should have 100% code coverage, while the pieces that have to invoke external components should have as close to 100% as possible. If you cannot reach it, I wouldn't sweat it too much.
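One way to keep that rule of thumb measurable, assuming an Istanbul-based toolchain, is to mark the thin external wrappers with Istanbul's ignore hints so the reported number tracks business logic only (the functions below are made up for illustration):

```javascript
// Pure business logic: kept at 100% coverage.
function retryDelayMs(attempt) {
  // exponential backoff, capped at 30 seconds
  return Math.min(30000, 1000 * Math.pow(2, attempt));
}

/* istanbul ignore next -- physically opens a network connection */
function connectToServer(host, port) {
  // the real net.connect(port, host) or $.ajax(...) would live here
}
```

The ignore hint is easy to abuse, so it is worth restricting it to code whose only job is to touch the outside world.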


Much more important: are the tests correct? Do they accurately reflect your business and its requirements? Having code coverage just to have code coverage doesn't mean anything if all you're doing is testing incorrectly, or testing incorrect code. That being said, if your tests are good, then 92–95% coverage is outstanding.


I'd say that unless the code is designed with the specific goal of allowing 100% test coverage, 100% may not be achievable. One reason is that if you code defensively (which you should), you will sometimes have code that handles situations you are sure shouldn't be happening, or can't be happening, given your knowledge of the system. Covering such code with tests is very hard by definition. Not having it may be dangerous: what if you're wrong and the situation does happen one time out of 256? What if a change in an unrelated place makes the impossible thing possible?

So 100% may be rather hard to reach by "natural" means. For example, if you have code that allocates memory and code that checks whether the allocation failed, then unless you mock out the memory manager (which may not be easy) and write a test that returns "out of memory", covering that branch might be difficult. For a JS application, the equivalent is defensive coding around possible DOM quirks in different browsers, possible failures of external services, and so on.
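Whether such a defensive branch is coverable often comes down to whether the dependency is injectable. A small sketch (the DOM-like object and function names are invented): if the lookup is passed in rather than hard-coded to `document`, a test can force the "can't happen" path.

```javascript
// The dom argument stands in for document; a test can stub it.
function loadWidget(dom) {
  var node = dom.getElementById("widget");
  if (!node) {
    // defensive branch: "can't happen" on the shipped page,
    // but reachable in a unit test with a stubbed DOM
    return { ok: false, error: "widget container missing" };
  }
  return { ok: true, node: node };
}

// Unit test for the defensive path:
var emptyDom = { getElementById: function () { return null; } };
// loadWidget(emptyDom).ok === false
```

With `document` hard-coded instead, that branch would likely sit in the uncoverable 5%.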


So I would say one should strive for being as close to 100% as possible and have a good reason for the delta, but I would not see not getting exactly 100% as necessarily failure. 95% can be fine on a big project, depending on what the 5% are.


If you are starting out with a new project, and you are strictly using a test-first methodology, then it is entirely reasonable to have 100% code coverage in the sense that all of your code will be invoked at some point when your tests have been executed. You may not however have explicitly tested every individual method or algorithm directly due to method visibility, and in some cases you may not have tested some methods even indirectly.
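The "invoked but never directly tested" situation is easy to picture with a private helper (this example is invented): the helper is fully covered, yet no test ever asserts on it by name.

```javascript
// Only the public API (add, total) is tested directly; roundCents is
// "private" and is covered only indirectly, through total().
function makeCart() {
  function roundCents(x) {
    return Math.round(x * 100) / 100;
  }
  var items = [];
  return {
    add: function (price) { items.push(price); },
    total: function () {
      return roundCents(items.reduce(function (a, b) { return a + b; }, 0));
    }
  };
}
```

A coverage report would show `roundCents` at 100%, even though its behavior is only ever checked through the behavior of `total`.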


Getting 100% of your code tested is potentially a costly exercise, particularly if you haven't designed your system to allow you to achieve this goal. And if you are focusing your design efforts on testability, you are probably not giving enough attention to designing your application to meet its specific requirements, particularly where the project is a large one. I'm sorry, but you simply can't have it both ways without compromising something.


If you are introducing tests to an existing project where testing has not been maintained or included before, then it is impossible to get 100% code coverage without the cost of the exercise outweighing the benefit. The best you can hope for is to provide test coverage for the critical sections of code that are called the most.


In most cases I would say you should only consider your sprint to have 'failed' if you haven't met your goals. Actually, I prefer not to think of such sprints as failures, because you need to be able to learn from sprints that don't meet expectations in order to get your planning right the next time you define one. Regardless, I don't think it's reasonable to treat code coverage as a factor in the relative success of a sprint. Your aim should be to do just enough to get everything working as specified, and if you are coding test-first, you should be able to feel confident that your tests support this aim. Any additional testing you feel you need to add is effectively sugar-coating, and thus an added expense that can hold you up in completing your sprints satisfactorily.


Is there some particular obstacle that is preventing you from hitting those last few lines? If not, and getting from 95% to 100% coverage is straightforward, you might as well go do it. Since you're here asking, I'm going to assume there is something. What is it?


Martin Fowler writes in his blog: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing."


However, there are even standards that mandate 100% coverage at the unit level. For example, it is one of the requirements in the standards of the European spaceflight community (ECSS, European Cooperation for Space Standardisation). The paper linked here tells an interesting story of a project that had the goal of reaching 100% test coverage in already-completed software. It is based on interviews with the engineers who developed the unit tests.


The act of programming correctly is next to impossible with today's tools. It is very difficult to write code that is totally correct, and doesn't have bugs. It is just not natural. So, with no other obvious option, we turn to techniques like TDD, and tracking code coverage. But as long as the end result is still an unnatural process, you will have a hard time getting people to do it consistently and happily.


Just look at all the software out there today. Most of it messes up pretty regularly. We don't want to believe this. We want to believe our technology is magical and makes us happy. And so we choose to ignore, excuse, and forget most of the times our technology messes up. But if we take an honest appraisal of things, most of the software out there today is pretty crappy.


The latter is extremely incomplete and experimental. Actually, it is a project I started, but I believe it would be a huge step forward for the craft of programming if I could ever put in the time to complete it. Basically, the idea is this: if contracts express the only aspects of a class's behavior that we care about, and we are already expressing contracts as code, why not have only the class and method definitions along with the contracts? That way the contracts would be the code, and we would not need to implement all the methods; let the library figure out how to honor the contracts for us.
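The contracts-as-code idea can be sketched in miniature. This is a toy design-by-contract wrapper, not the project being described; every name here is invented:

```javascript
// Attach a precondition and postcondition to a function; both are
// checked at call time, so the contract is executable code.
function contracted(pre, post, fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    if (!pre.apply(null, args)) {
      throw new Error("precondition failed");
    }
    var result = fn.apply(null, args);
    if (!post(result)) {
      throw new Error("postcondition failed");
    }
    return result;
  };
}

// Usage: a square root that documents and enforces its own contract.
var sqrt = contracted(
  function (x) { return typeof x === "number" && x >= 0; },
  function (r) { return r >= 0; },
  Math.sqrt
);
```

The step the passage is imagining goes further: keep only `pre` and `post` and have the library synthesize `fn`, which is the genuinely hard (and unfinished) part.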
