Oh No! Bugs! Activation Code [key Serial Number]


Macabeo Eastman

Jul 9, 2024, 4:42:35 PM
to piesmeltansay

The fundamental nature of coding is that our task, as programmers, is to recognize that every decision we make is a trade-off. To be a master programmer is to understand the nature of these trade-offs and to be conscious of them in everything we write. In coding, there are many dimensions along which you can rate code.

Over the years I've heard various estimates for the average number of exploitable bugs per thousand lines of code; a common figure is one exploitable bug per thousand lines. A Google search turns up some much lower figures, like 0.020 and 0.048, but also very high ones, like 5 to 50. All of these numbers are for code that hasn't been reviewed or tested for security.

Have any serious empirical studies been done on this subject? Such a study could be done based on well reviewed open source software by checking how many security holes have been reported over the years. If not, where do these numbers come from?

Programming Language - Some languages let you do very unsafe things; e.g., C makes you allocate memory directly, allows pointer arithmetic, and uses null-terminated strings, all of which introduce potential security flaws that safer (but somewhat slower) languages like Ruby or Python do not allow. Other factors matter too: the purpose of the application, and the kind of coder and code review involved.
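To make the memory-safety point concrete, here is a minimal Python sketch (names invented for illustration): where an equivalent C pointer bug could silently read or corrupt adjacent memory, a memory-safe runtime turns the out-of-bounds access into a catchable exception.

```python
# In a memory-safe language, an out-of-bounds access raises an
# exception instead of silently reading past the end of the buffer,
# as an equivalent C pointer bug might.
def read_item(buf, index):
    """Return buf[index], mapping out-of-range access to a clear error."""
    try:
        return buf[index]
    except IndexError:
        return None  # the runtime caught the bad access for us

data = [10, 20, 30]
print(read_item(data, 1))   # in range
print(read_item(data, 99))  # out of range: handled safely, no overflow
```

The trade-off mentioned above is visible here: the bounds check costs a little speed, but it converts a whole class of exploitable memory errors into ordinary, testable error handling.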

Type of Application - If a non-malicious programmer writes a relatively complex Angry Birds-type game in Java (without using unsafe modules), there's a very good chance there aren't any "exploitable" bugs, especially after testing, with the possible exception of being able to crash the program. A web application in PHP written by amateurs has a good chance of having various exploitable flaws (SQL injection, cross-site scripting, bad session management, weak hashing, remote file inclusion, etc.).
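The SQL injection flaw mentioned above is easy to demonstrate. Here is a small sketch using Python's built-in sqlite3 module (the table, column, and data are invented for illustration):

```python
# Demonstration of SQL injection and its standard fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # String concatenation: a name like "' OR '1'='1" becomes part of
    # the SQL itself and matches every row.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data, so the
    # same malicious input matches nothing.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all rows
print(find_user_safe(payload))    # returns []
```

This is exactly the kind of flaw that is common in amateur PHP web apps and essentially absent from a sandboxed Java game: it only exists where untrusted input meets an interpreter.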

Furthermore, counting the number of "exploitable" bugs is not a straightforward task either; if finding bugs were straightforward, they would be removed in code review. Many bugs arise only from subtle race conditions or complex interactions among programs and libraries.

As someone who security-tests web apps for fun and profit, I find the number of security defects per thousand lines in common open-source web apps is far higher than the 0.08 figure quoted. Presumably the issue is that CVEs record only security defects that were found and reported through the relevant channels. You need metrics from code that has undergone systematic review, so that at least the low-hanging security defects have been detected; otherwise what you are measuring is some fraction of the testing effort.

This section contains descriptions of common bug check codes that are displayed on the blue bug check screen. This section also describes how you can use the !analyze extension in the Windows Debugger to display information about a bug check code.

If a specific bug check code does not appear in this topic, use the !analyze extension in the Windows Debugger (WinDbg) in kernel mode with the syntax !analyze -show, supplying the bug check code as the argument:

Provide the stop code parameters to the !analyze command to display any available parameter information. For example, to display information on Bug Check 0x9F: DRIVER_POWER_STATE_FAILURE with a parameter 1 value of 0x3, use !analyze -show 0x9F 0x3.
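At the kernel-mode WinDbg prompt, the two usages described above look like this (the 0x9F and 0x3 values come from the example in the text):

```
kd> !analyze -show 0x9F
kd> !analyze -show 0x9F 0x3
```

The first form displays the general description of the bug check code; the second adds the interpretation of parameter 1.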

When a bug check occurs, a dump file may be available that contains additional information about the contents of memory when the stop code occurred. To understand the contents of memory during a failure, knowledge of processor memory registers and assembly is required.

Live Dump stop codes do not reset the OS, but allow for the capture of memory information for abnormal situations where the operating system can continue. For information about live dumps, see Bug Check Code Reference - Live Dump.

I've heard people say (although I can't recall who in particular) that the number of bugs per line of code is roughly constant regardless of what language is used. What is the research that backs this up?

Edited to add: I don't have access to it, but apparently the authors of this paper "asked the question whether the number of bugs per lines of code (LOC) is the same for programs written in different programming languages or not."

Industry average experience is about 1-25 errors per 1000 lines of code for delivered software. The software has usually been developed using a hodgepodge of techniques (Boehm 1981, Gremillion 1984, Yourdon 1989a, Jones 1998, Jones 2000, Weber 2003). Cases that have one-tenth as many errors as this are rare; cases that have 10 times more tend not to be reported. (They probably aren't ever completed!)

Harlan Mills pioneered "cleanroom development," a technique that has been able to achieve rates as low as 3 defects per 1000 lines of code during in-house testing and 0.1 defects per 1000 lines of code in released product (Cobb and Mills 1990).

Are there any studies that aggregate data over a wide population of contributed code and establish a correlation between the amount of code written in a commit and the number of bugs discovered in that code? It would be hard to do on GitHub without knowing whether a change was due to new functionality or a bug fix, but you could determine a relation between lines of code per commit and how much thrashing eventually goes on in that code.

If all your program does is Console.WriteLine over and over.. chances are it won't have any bugs no matter how big it gets. If you're writing the next great document database, chances are you'll have a lot of bugs.

You couldn't scrape this information from GitHub because you don't know how hard the problems are that people are trying to solve. If most projects on GitHub are the complexity of a tic-tac-toe game, again, you probably won't see a ton of bugs. Your analysis could fool you into saying "Wow, codebases can expand with relatively few bugs or none at all!"

The only metric I'm familiar with that tries to relate possible defects to program size is one of Halstead's complexity measures. The figure used is B = (E^(2/3))/3000 or B = V/3000, where B is the number of delivered bugs, E is the amount of effort, and V is the program volume. Expanding to the counted values, the volume is V = (N1 + N2) * log2(n1 + n2), and the effort is E = D * V with difficulty D = (n1/2) * (N2/n2), where n1 is the number of distinct operators, n2 is the number of distinct operands, N1 is the total number of operators, and N2 is the total number of operands.
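The B = V/3000 variant described above is simple enough to sketch directly (the example operator/operand counts are invented for illustration):

```python
# Halstead's delivered-bug estimate: B = V / 3000, where the
# program volume is V = (N1 + N2) * log2(n1 + n2).
import math

def halstead_bugs(n1, n2, N1, N2):
    """Estimate delivered bugs from operator/operand counts.

    n1, n2: number of distinct operators and operands.
    N1, N2: total number of operators and operands.
    """
    volume = (N1 + N2) * math.log2(n1 + n2)
    return volume / 3000

# A hypothetical small program: 10 distinct operators, 15 distinct
# operands, 120 total operators, 90 total operands.
print(halstead_bugs(10, 15, 120, 90))
```

Note how small the estimate comes out for a small program; the 3000 divisor bakes in an assumption about the "mental discriminations" per delivered bug, which is one reason this measure is rarely used on its own today.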

If you freeze development for as long as it takes, can you actually fix all the bugs until there is simply not a single bug left, if such a thing could even be verified by computers? What are the arguments for and against the existence of a bug-free system?

By bugs I mean everything from the simplest typos in the UI to more serious blocking bugs that have no workaround, for example a particular scripting function that calculates normals incorrectly. Even when there are workarounds, the problem still has to be fixed. You could say you can do this particular thing manually instead of using the provided function, but that function still has to be fixed.

First things first: you're ignoring the bigger picture of how your program runs. It does not run in isolation on a perfect system. Even the most basic "Hello World" program runs on an operating system, and therefore even the simplest of programs is susceptible to bugs that may exist in the operating system.

The existence of libraries makes this more complex. While operating systems tend to be fairly stable, libraries are a mixed bag when it comes to stability. Some are wonderful. Others ... not so much ... If you want your code to be 100% bug free, then you will need to also ensure that every library you run against is completely bug free, and many times this simply isn't possible as you may not have the source code.

Then there are threads to think about. Most large scale programs use threads all over the place. We try to be careful and write threads in such a way where race conditions and deadlock do not occur, but it simply is not possible to test every possible combination of code. In order to test this effectively, you would need to examine every possible ordering of commands going through the CPU. I have not done the math on this one, but I suspect that enumerating all of the possible games of Chess would be easier.
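The race conditions described above come down to unsynchronized read-modify-write sequences. A minimal sketch: several threads increment a shared counter, and a lock serializes the critical section so the result is deterministic (without the lock, increments could interleave and be lost).

```python
# Several threads increment a shared counter; the lock serializes
# the read-modify-write so no increments are lost.
import threading

COUNT = 100_000
counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(COUNT):
        with lock:  # only one thread at a time runs the critical section
            counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock, always 4 * COUNT = 400000
```

The testing difficulty mentioned above is exactly that removing the lock does not reliably produce a wrong answer: the bug depends on a particular interleaving, so a test suite can pass thousands of times while the defect is still there.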

According to this article, the on-board software for the Space Shuttle came very close -- the last three versions of the 420,000 line program had just one error each. The software was maintained by a group of 260 men and women. A large number of these people were verifiers, whose sole purpose was to find errors.

The upgrade of the software to permit the shuttle to navigate with Global Positioning Satellites impacted just 1.5% of the program, or 6,366 lines of code. The specs for that one change ran 2,500 pages. The specs for the overall program filled 30 volumes and ran 40,000 pages, or an average of ten lines of code per page of the spec.

Mathematically it might be possible to write "bugless" software of such complexity, depending on how you define "bug". Proving it might also be mathematically possible, by designing a test system that exercises every line of code in every possible way, that is, every possible use case. But I am not sure; if you are dealing with a system that does complex calculations, you may run into an "infinity problem"...

Usable: The system fulfills the essential requirements it was designed for. There may be bugs, but they will be in edge cases (outliers or annoyances), not bugs that compromise the fundamentals of the system; in other words, the system is robust.

How many bugs can we expect a huge program to have anyway? One number I found was "10 defects per 1000 lines" (Code Complete, 2nd edition, page 517; used merely as an example, not quoting any data). That gives us around 200,000 to 300,000 bugs in your software. Fortunately, we have ways to improve the quality of a program: unit testing, code reviews, and ordinary manual testing are all known to reduce the number of bugs. Still, the number will remain high.
