Optimization in programming is a complex field. Low-level optimization involves rearranging and reducing the instructions the processor executes. Mid-level optimization analyzes algorithms in terms of the number of records they process. High-level optimization works at the level of concepts and design.
Low-level optimization requires an understanding of both the programming language and the hardware. For instance, integer arithmetic is handled by the ALU, while floating-point arithmetic is handled by the floating-point unit (FPU). Knowing the execution time of each instruction, usually measured in clock cycles, is beneficial. Processor companies such as Intel and AMD publish comprehensive manuals for their chips. These manuals span thousands of pages, but investing time in studying them is highly valuable if you're genuinely dedicated to programming. Even without a deep grasp of the chip's internals, you can still perform low-level optimization. Consider the following code:
    int total = 0;
    for (int i = 0; i < 5; i++)
        total += i;
    printf("%d\n", total);
Think through every instruction executed when that code runs. Unrolled, it looks something like this:
    int total = 0;
    int i = 0;
    if (i >= 5) goto finished;
    total += i;
    i++;
    if (i >= 5) goto finished;
    total += i;
    i++;
    if (i >= 5) goto finished;
    total += i;
    i++;
    if (i >= 5) goto finished;
    total += i;
    i++;
    if (i >= 5) goto finished;
    total += i;
    i++;
    if (i >= 5) goto finished;
    finished:
    printf("%d\n", total);

You could rewrite this as:
    printf("%d\n", 0 + 1 + 2 + 3 + 4);
The revised code runs far faster: the compiler folds the constant expression at compile time, so the only work left at runtime is the printf call itself.
There are no strict rules for low-level optimization; instead, there are tricks, and you pick them up as you program. Math is a good example. If you only need integer math, use integer arithmetic; it's much faster than floating-point math. Simplify your equations. If you have (a*x + b*x) / (c/d), simplify it to x*(a+b)*d/c. This reduces the instruction count from five to four and eliminates a division operation (division is typically slower than multiplication). Plenty of low-level optimization tips are available online.
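Here is a minimal sketch of that simplification in C; the variable names mirror the formula above, and both functions return the same value whenever c and d are nonzero:

    #include <stdio.h>

    /* Naive form: two multiplies, one add, two divides (five operations). */
    double naive(double a, double b, double c, double d, double x) {
        return (a * x + b * x) / (c / d);
    }

    /* Simplified form: one add, two multiplies, one divide (four
       operations, and one fewer division). */
    double simplified(double a, double b, double c, double d, double x) {
        return x * (a + b) * d / c;
    }

    int main(void) {
        /* Both print 20.000000 for the same inputs. */
        printf("%f %f\n", naive(1, 2, 3, 4, 5), simplified(1, 2, 3, 4, 5));
        return 0;
    }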
The next step in low-level optimization is getting to know your library. There are many libraries, and each call takes a different amount of time. Many library routines are poorly written, taking far longer than necessary even when they work correctly. Prefer simple, fast library calls over complex ones.
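One illustrative case, with the caveat that relative costs vary by library implementation: both calls below copy a short string, but snprintf runs the whole format-parsing machinery to do it, while strcpy performs a plain byte copy.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char dst[32];

        /* Heavier call: snprintf parses the format string at runtime. */
        snprintf(dst, sizeof dst, "%s", "hello");

        /* Lighter call: strcpy just copies bytes up to the terminator. */
        strcpy(dst, "hello");

        printf("%s\n", dst);
        return 0;
    }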
Mid-level optimization involves algorithm analysis, often starting with Big O notation. Big O notation gives a general sense of how runtime grows with the number of records processed. Since most CPU time is spent processing large numbers of records, programmers realized they could generalize algorithms into runtime equations. For example, bubble sort has a typical runtime of n*n (where n is the number of records), while quicksort has a typical runtime of n*log(n). So if you're dealing with a large number of items, quicksort is usually a better fit than bubble sort (under most scenarios). Big O notation keeps only the largest term: if you process data with an n*n algorithm followed by an n*log(n) algorithm, the n*n term dominates as n grows, so Big O notation classifies the combination as n*n.
Big O notation helps you select algorithms. Every problem has many possible solutions, and each solution uses a different set of algorithms. When deciding which set to use, calculate the Big O notation for each and pick the set with the best runtime.
Although it began with Big O notation, algorithm analysis has advanced since then. You can now write equations not only for runtime but also for memory usage, disk space, network I/O, file descriptors, and more. While the largest term usually dictates the best approach for large record counts, real workloads often have a limited record count. Suppose an algorithm deals with an average of one hundred fifty records. Work out the equations for each set of algorithms, including their constant factors, and use them to determine the most suitable approach.
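As a sketch of that kind of analysis, suppose one set of algorithms costs about 5*n*n steps and another about 200*n*log2(n) steps; the coefficients here are hypothetical, chosen only for illustration.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double n = 150.0;                          /* average record count */

        /* Hypothetical cost equations for two sets of algorithms. */
        double quadratic    = 5.0 * n * n;         /* = 112,500 steps      */
        double linearithmic = 200.0 * n * log2(n); /* ~ 216,900 steps      */

        printf("n*n set:     %.0f steps\n", quadratic);
        printf("n*log n set: %.0f steps\n", linearithmic);
        return 0;
    }

Despite its better asymptotic class, the n*log(n) set loses at this record count because of its larger constant factor.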
Most complex projects have defining challenges. You might need to store terabytes of data per day and generate reports from it. You might have only several gigabytes of RAM and need fast lookups. A website might need to handle tens of thousands of requests per second. These challenges often dictate which type of optimization to prioritize. When dealing with terabytes of data, careful memory management is essential; loading all records into memory isn't feasible. Processing tens of thousands of web requests per second requires multiple servers due to port limits and timeouts. When such constraints exist, they take precedence over other optimization concerns.
High-level optimization occurs during the design phase. The aim is an elegant design: one that accomplishes a lot with minimal code and works flawlessly. Begin with a top-down design. Design tools like flowcharts and data flow diagrams are available, but they won't produce elegant designs on their own. An elegant design relies on simple concepts. Identify concepts that combine to solve the problem and write them down. A good collection of concepts organizes the project and handles edge cases naturally.
For example, consider networking. Information flows between two computers. You need a way to verify that the other computer is present, so you use the concept of handshaking. You need to confirm data integrity, which the concept of checksums provides. To request resends after errors, you number packets sequentially so each one can be tracked. When multiple programs share the same connection, encapsulating packets with layered processing is a useful concept. And when multiple computers are connected, the concept of computer addresses becomes essential.
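Here is a minimal sketch of how those concepts might appear in a packet header; the field names and layout are hypothetical, not taken from any real protocol:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical header layout combining the concepts above. */
    struct packet_header {
        uint32_t src_addr;   /* concept: computer addresses         */
        uint32_t dst_addr;
        uint32_t seq_num;    /* concept: sequential packet numbers  */
        uint16_t checksum;   /* concept: data-integrity checksum    */
        uint16_t length;     /* payload bytes that follow           */
    };

    /* Simple additive checksum over the payload (illustrative only;
       real protocols use stronger checks such as CRCs). */
    static uint16_t payload_checksum(const uint8_t *data, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += data[i];
        return (uint16_t)(sum & 0xFFFF);
    }

    int main(void) {
        const uint8_t payload[] = "hello";
        struct packet_header h = {
            .src_addr = 1, .dst_addr = 2, .seq_num = 0,
            .checksum = payload_checksum(payload, sizeof payload - 1),
            .length   = sizeof payload - 1,
        };
        printf("packet %u: %u bytes, checksum 0x%04x\n",
               h.seq_num, h.length, h.checksum);
        return 0;
    }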
Concepts combine to form complex systems, and the more concepts you integrate, the more intricate the project becomes. Therefore, always optimize at the level of concepts. This principle extends beyond programming. In physics or engineering, changing your perspective can lead to elegant solutions; for example, exploiting geometric symmetry often solves electromagnetic problems elegantly.
Data compression offers an example of algorithm-level optimization. Data often contains regular patterns, which is what makes compression possible. Various compression algorithms exist, from early Huffman encodings to modern methods. If your application stores many strings and doesn't need to sort them, you can compress the strings to save memory. When looking up a string, compress the query first, then search for the compressed version. When sending a string to a user, decompress it before transmission. Compressing and decompressing strings takes CPU cycles, but you may end up using less CPU within the algorithm, and you certainly use less memory. Not every scenario suits compression, so learn how it behaves and apply it wisely.
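Here is a minimal sketch of the lookup side using zlib (the library choice is my assumption; the text names none). The trick is that identical input compressed with identical settings yields identical bytes, so a stored compressed string can be matched without ever decompressing it:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    /* Compress src into dst; returns the compressed length, or 0 on error. */
    static uLongf compress_str(const char *src, Bytef *dst, uLongf cap) {
        uLongf dst_len = cap;
        if (compress(dst, &dst_len, (const Bytef *)src, strlen(src)) != Z_OK)
            return 0;
        return dst_len;
    }

    int main(void) {
        Bytef stored[128], query[128];

        /* Store the string compressed, then compress the lookup key the
           same way and compare compressed bytes directly. */
        uLongf stored_len = compress_str("hello world, hello world",
                                         stored, sizeof stored);
        uLongf query_len  = compress_str("hello world, hello world",
                                         query, sizeof query);

        if (stored_len && stored_len == query_len &&
            memcmp(stored, query, stored_len) == 0)
            printf("match found (%lu compressed bytes)\n",
                   (unsigned long)stored_len);
        return 0;
    }

Decompression (zlib's uncompress) is only needed on the way out, when sending the string back to a user.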
A good example of concept-level optimization is 3D design. Suppose you were a programmer twenty years ago who wanted to create some 3D designs, back when 3D design software was very expensive and the available free software couldn't do the job. To proceed, you would need your own 3D design software. Writing a traditional 3D editor would take years, so instead you decide to write a programming library. Rather than polygon-based 3D, you choose equation-based 3D. You use the concept of positive and negative shapes, where a positive shape adds material and a negative shape removes it. You add the concept of elemental shapes, for things like cubes and spheres. You add the concept of distorting space through a matrix, to move, scale, and distort shapes. You add the concept of an intersection line, which calculates the start and stop points where a line passes through the shapes. That alone is enough to create the library; for a senior developer, it's a two-day project. Creating an image of a layer is a simple algorithm. Since the library already computes line intersections, you can also build a ray-tracing algorithm on top of it, which lets you visualize the object with everything normally seen in ray tracing, such as reflection and refraction. The ray tracer is another two-day project. And if you have a resin 3D printer that uses DLP, the layer output from this library can feed the printer directly. Normal 3D software takes years to create. Conceptual optimization, properly applied, shortens the project from years to four days.
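To make the intersection-line concept concrete, here is a minimal sketch for one elemental shape, a sphere; the names and the single-shape scene are illustrative, not the library described above:

    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y, z; } vec3;

    /* Solve |o + t*d - c| = r for t; returns 1 and fills t_in/t_out
       when the line hits the sphere. d must be unit length. */
    int sphere_intersect(vec3 o, vec3 d, vec3 c, double r,
                         double *t_in, double *t_out) {
        vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
        double b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
        double q = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
        double disc = b * b - q;
        if (disc < 0) return 0;       /* line misses the sphere */
        *t_in  = -b - sqrt(disc);
        *t_out = -b + sqrt(disc);
        return 1;
    }

    int main(void) {
        vec3 origin = { 0, 0, -5 }, dir = { 0, 0, 1 };
        vec3 center = { 0, 0, 0 };
        double t_in, t_out;

        /* Positive shape: material exists between t_in and t_out. */
        if (sphere_intersect(origin, dir, center, 1.0, &t_in, &t_out))
            printf("material from t=%.2f to t=%.2f\n", t_in, t_out);
        return 0;
    }

A negative shape would subtract its own [t_in, t_out] interval from the positive intervals along the same line, which is how positive and negative shapes combine into one solid.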