Bug Free Programming

A good program does what it needs to do and does not fail. That sounds simple, yet such programs are rare. Too many programmers believe that writing bug-free programs is impossible, because no matter how much time and effort they spend, bugs always remain. This is a flawed belief.

To understand bugs, and how to write a bug-free program, begin at the low level. Consider adding two numbers, or more specifically two integers. To be even more specific, two unsigned eight-bit integers. A programmer would write this as: a = b + c. This is the first mistake. To understand the nature of this mistake, one must first understand binary integer arithmetic. We start with a two-bit adder (a half adder), which adds two single bits:

0 + 0 = 00
0 + 1 = 01
1 + 0 = 01
1 + 1 = 10
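
For readers who prefer code to truth tables, the two-bit adder can be written out directly. The sketch below is my own illustration in C; the bitwise operators are one conventional way to express it.

#include <stdio.h>

/* Half adder: adds two single bits, producing a sum bit and a carry bit. */
static void half_add(unsigned a, unsigned b, unsigned *sum, unsigned *carry)
{
    *sum   = a ^ b;  /* 1 when exactly one input is 1 */
    *carry = a & b;  /* 1 only when both inputs are 1 */
}

int main(void)
{
    for (unsigned a = 0; a <= 1; a++)
        for (unsigned b = 0; b <= 1; b++) {
            unsigned sum, carry;
            half_add(a, b, &sum, &carry);
            printf("%u + %u = %u%u\n", a, b, carry, sum);
        }
    return 0;
}

Running it reproduces the table above, with the carry printed as the left digit.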

A two-bit adder is simple to understand. Since this is binary, 2 is represented as 10. To add two eight-bit numbers you begin by adding their two least significant bits using the table above. This gives a sum bit and a carry bit. Next a three-bit adder (a full adder) adds the next pair of bits along with the carry from the previous position. The table for this is:

0 + 0 + 0 = 00
0 + 0 + 1 = 01
0 + 1 + 0 = 01
0 + 1 + 1 = 10
1 + 0 + 0 = 01
1 + 0 + 1 = 10
1 + 1 + 0 = 10
1 + 1 + 1 = 11
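
The same exercise for the three-bit adder, again as a sketch in C: the sum is the parity of the three inputs, and the carry-out is 1 whenever at least two of the inputs are 1.

#include <stdio.h>

/* Full adder: adds two bits plus a carry-in, producing a sum bit and a carry-out. */
static void full_add(unsigned a, unsigned b, unsigned cin,
                     unsigned *sum, unsigned *cout)
{
    *sum  = a ^ b ^ cin;                      /* parity of the three inputs      */
    *cout = (a & b) | (a & cin) | (b & cin);  /* 1 when two or more inputs are 1 */
}

int main(void)
{
    for (unsigned a = 0; a <= 1; a++)
        for (unsigned b = 0; b <= 1; b++)
            for (unsigned cin = 0; cin <= 1; cin++) {
                unsigned sum, cout;
                full_add(a, b, cin, &sum, &cout);
                printf("%u + %u + %u = %u%u\n", a, b, cin, cout, sum);
            }
    return 0;
}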

This has a maximum value of 11 (or 3), which still fits in two bits. All subsequent bits can be added using a three-bit adder. An eight-bit number has eight binary digits, each 0 or 1. Let's add two simple eight-bit unsigned integers.

00110101  (53)
01000110  (70)
--------  ----
01111011  (123)
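
Chaining eight of these three-bit additions together, least significant bit first, gives the same answer. The sketch below (a ripple-carry adder; the function name is mine) checks the 53 + 70 example.

#include <stdio.h>
#include <stdint.h>

/* Add two 8-bit values one bit at a time, exactly as in the worked example.
 * Returns the 8-bit result; *carry_out receives the ninth bit. */
static uint8_t ripple_add8(uint8_t x, uint8_t y, unsigned *carry_out)
{
    uint8_t result = 0;
    unsigned carry = 0;

    for (int i = 0; i < 8; i++) {
        unsigned a = (x >> i) & 1;                    /* bit i of the first operand  */
        unsigned b = (y >> i) & 1;                    /* bit i of the second operand */
        unsigned sum = a ^ b ^ carry;                 /* full-adder sum bit          */
        carry = (a & b) | (a & carry) | (b & carry);  /* full-adder carry-out        */
        result |= (uint8_t)(sum << i);
    }
    *carry_out = carry;
    return result;
}

int main(void)
{
    unsigned carry;
    uint8_t sum = ripple_add8(53, 70, &carry);
    printf("53 + 70 = %u (carry %u)\n", (unsigned)sum, carry);  /* prints 123, carry 0 */
    return 0;
}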

This addition produces the correct result. Now consider this one:

 10000000  (128)
 10000000  (128)
---------  -----
100000000  (256)

This addition has nine result bits, but the register only holds eight. So the eight right-most bits are copied into the register and the ninth bit is either recorded in a flag or discarded. On the x86 architecture, the carry and overflow flags are set after each addition. Unfortunately you can generally only inspect these flags in assembler, because higher-level languages do not expose them: flag behaviour differs between processors, and a language must work across many of them. As far as the program can see, 128 + 128 produces 0 instead of 256.
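
The wraparound is easy to reproduce in C, where unsigned arithmetic simply discards the extra bit. Since the carry flag is out of reach, the lost bit has to be recovered arithmetically; checking whether the truncated result is smaller than one of the operands is a common trick (GCC and Clang also provide __builtin_add_overflow). This is a sketch under those assumptions, not a complete treatment.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Add two unsigned 8-bit values. The hardware carry flag is invisible here,
 * so the overflow is detected arithmetically: if the truncated result is
 * smaller than an operand, the ninth bit was dropped. */
static uint8_t add8(uint8_t a, uint8_t b, bool *overflow)
{
    uint8_t result = (uint8_t)(a + b);  /* only the low eight bits survive */
    *overflow = result < a;             /* true when the carry was lost    */
    return result;
}

int main(void)
{
    bool overflow;
    uint8_t r = add8(128, 128, &overflow);
    printf("128 + 128 = %u (overflow: %s)\n", (unsigned)r, overflow ? "yes" : "no");
    /* prints: 128 + 128 = 0 (overflow: yes) */
    return 0;
}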

This example is simple, but it demonstrates a principle: nearly every programming operation results in multiple branches. In the case of addition, one branch represents a normal result and the other an overflow. In the rare case that the overflow is part of the algorithm, both branches merge into the next operation. Otherwise nearly every operation splits into two or more paths. This means that programs are very rarely linear; they are full of branching possibilities.
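
When the carry is part of the algorithm, the two branches really do merge into the next operation. One sketch of that case (the function name is my own): a 16-bit addition built from two 8-bit additions, where the overflow of the low byte is not an error at all but an input to the high byte.

#include <stdio.h>
#include <stdint.h>

/* Add two 16-bit values using only 8-bit-sized additions. The overflow of
 * the low byte merges into the high-byte addition as a carry-in. */
static uint16_t add16_via_bytes(uint16_t x, uint16_t y)
{
    unsigned lo    = (x & 0xFF) + (y & 0xFF);      /* low bytes, 0..510  */
    unsigned carry = lo >> 8;                      /* 0 or 1             */
    unsigned hi    = (x >> 8) + (y >> 8) + carry;  /* high bytes + carry */
    return (uint16_t)((hi << 8) | (lo & 0xFF));
}

int main(void)
{
    printf("300 + 500 = %u\n", (unsigned)add16_via_bytes(300, 500));  /* 800 */
    printf("255 + 1   = %u\n", (unsigned)add16_via_bytes(255, 1));    /* 256 */
    return 0;
}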

This is where modern programming languages have problems. Every modern programming language is linear in nature. Code is written in linear fashion: when one statement finishes, the one following it begins. Programs are even written in simple text editors, one character after another. Programming languages developed this way because that is how people think, so they have evolved to fit common thought processes. Even processors and the lowest-level language, assembler, are built around the concept of a linear instruction sequence. This is fundamentally flawed.

Once the concept of instruction branching is introduced, everything from hardware to programming languages changes. An overflow in addition is not a bug; it is just another possible outcome. A program is simply a group of instructions that branch and merge together. If every outcome is properly managed, and the program has been properly designed, the code will contain no bugs. A programming language in this environment would resemble a flow chart rather than a text document.

Once you start coding in this kind of environment, the way you think changes. This change matters because you organize and structure code differently: you design it to intrinsically reduce the number of branches. While modern languages are not ideally suited to this type of programming, you can still implement the basic idea, and doing so drastically reduces, and in most cases eliminates, programming bugs.
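
One illustration of reducing branches by design (the example is mine, not a prescription): summing 8-bit samples into a much wider accumulator removes the per-addition overflow branch entirely, because a 64-bit total cannot overflow for any array that fits in memory.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Summing 8-bit samples into a 64-bit accumulator: the overflow branch is
 * designed out, since 2^64 / 255 additions is far more than any array that
 * could exist in memory. One branch fewer to manage. */
static uint64_t sum_samples(const uint8_t *samples, size_t count)
{
    uint64_t total = 0;
    for (size_t i = 0; i < count; i++)
        total += samples[i];
    return total;
}

int main(void)
{
    uint8_t samples[] = { 53, 70, 128, 128 };
    size_t count = sizeof samples / sizeof samples[0];
    printf("sum = %llu\n", (unsigned long long)sum_samples(samples, count));
    return 0;
}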

This concept develops naturally as skill grows. When bugs occur, you track them down and learn from them. In the process you tend to check more and more statements for errors, which increases the amount of code, until you start planning for branching and exceptions. Once you plan for them, the code becomes simpler again. Another path to the same lesson runs through complex algorithms: a programmer who enjoys them quickly learns that complex algorithms are nonlinear. This helps break down the linear mindset and opens the mind to new methods. After learning this, you realize that Turing machines are also nonlinear and that programming has returned to where it started.

Once you become accustomed to this kind of programming, you will no longer look favorably on so-called modern methods of exception handling. The trend towards 'throw' and 'catch', or similar syntax, will send a shiver down your spine. Each function should handle its own errors and return several possible outcomes, and those outcomes should be checked in the calling code. To a programmer skilled in this style, a novice programmer's code is a list of statements they assume will work. Such code rarely works properly, and fixing it would take so much time and effort that it is often easier to rewrite it completely.
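
A sketch of that style in C (the enum and function names are illustrative, not a prescribed interface): the function names every outcome it can produce, and the calling code is forced to decide what each one means.

#include <stdio.h>
#include <stdint.h>

/* Every outcome this division can produce, named explicitly. */
typedef enum {
    DIV_OK,        /* normal result                          */
    DIV_BY_ZERO,   /* divisor was zero                       */
    DIV_OVERFLOW   /* INT32_MIN / -1 does not fit in 32 bits */
} div_outcome;

/* Divide a by b, writing the quotient through *out.
 * The return value tells the caller which branch was taken. */
static div_outcome checked_div(int32_t a, int32_t b, int32_t *out)
{
    if (b == 0)
        return DIV_BY_ZERO;
    if (a == INT32_MIN && b == -1)
        return DIV_OVERFLOW;
    *out = a / b;
    return DIV_OK;
}

int main(void)
{
    int32_t q = 0;
    switch (checked_div(256, 8, &q)) {
    case DIV_OK:       printf("256 / 8 = %d\n", (int)q);  break;
    case DIV_BY_ZERO:  printf("division by zero\n");      break;
    case DIV_OVERFLOW: printf("result does not fit\n");   break;
    }
    return 0;
}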

The concept of handling all branches is difficult to master, but it vastly improves program quality. It should be learned by senior programmers who want to push their skill to a new level.
