
The DRY (“Don’t repeat yourself”) principle is probably one of the worst pieces of coding advice out there; it has been taken to extremes. Stop building abstractions on top of abstractions, or services on top of services.

In the same way, the “CLEAN” code principle can cause huge performance problems at scale. It’s easy to lose sight of performance and get caught up in the latest jargon and libraries when they offer little to nothing over a fast, direct way of writing code.

Over-engineered, bloated code should be the biggest red flag in software design, period!
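
To make that concrete, here is a small, hypothetical sketch (the class names are invented purely for illustration) of layering taken too far, next to the direct version:

// Hypothetical: a service on top of a service, each layer only forwarding a call
class ConfigService {
  constructor(store) { this.store = store; }
  get(key) { return this.store[key]; }
}

class UserSettingsService {
  constructor(configService) { this.configService = configService; }
  getSetting(key) { return this.configService.get(key); }
}

class ThemeProvider {
  constructor(settingsService) { this.settingsService = settingsService; }
  getTheme() { return this.settingsService.getSetting('theme'); }
}

const theme = new ThemeProvider(
  new UserSettingsService(new ConfigService({ theme: 'dark' }))
).getTheme();

// The direct version does exactly the same thing
const settings = { theme: 'dark' };
const directTheme = settings.theme;

console.log(theme, directTheme); // 'dark' 'dark'

Three layers of indirection, three allocations and three extra calls to read one property.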

It’s essential to strike a balance between clean code principles and performance considerations based on the specific requirements of the project. In many cases, clean code and good performance are not mutually exclusive and can be achieved through careful design, optimisation, and testing.

For example, take the Fast Inverse Square Root algorithm, Fast InvSqrt(), built around the magic constant 0x5F3759DF.
See more about Magic Numbers.

If it wasn’t for this performance-orientated algorithm (and other code snippets like it in the Quake codebase), the game would never have run as fast or become so legendary.

The algorithm is a well-known method for approximating the inverse square root of a floating-point number, typically used in computer graphics and game programming for operations involving three-dimensional vectors.

It’s based on a clever bit manipulation trick and exploits the IEEE 754 floating-point representation to approximate the inverse square root quickly.

The following code is the fast inverse square root implementation from Quake III, stripped of C preprocessor directives, but including the exact original comment text:

float q_rsqrt(float number)
{
  long i;
  float x2, y;
  const float threehalfs = 1.5F;

  x2 = number * 0.5F;
  y  = number;
  i  = * ( long * ) &y;                       // evil floating point bit level hacking
  i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
  y  = * ( float * ) &i;
  y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
  // y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

  return y;
}

More about it here.
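
For what it’s worth, the reason the trick works: reinterpreting the bits of a positive float as a 32-bit integer gives you, roughly, a scaled and shifted log2 of that float. A rough illustration of the idea in JavaScript (my own sketch, not from the Quake source):

// Reinterpret the bits of a 32-bit float as a 32-bit integer
function floatBitsToInt(x) {
  const f = new Float32Array([x]);
  return new Int32Array(f.buffer)[0];
}

// For a positive float, bits / 2^23 - 127 approximates log2(x)
function approxLog2(x) {
  return floatBitsToInt(x) / (1 << 23) - 127;
}

console.log(approxLog2(8));   // 3     (exact: 3)
console.log(approxLog2(10));  // 3.25  (exact: ~3.3219)

Seen that way, 0x5f3759df - (i >> 1) halves and negates the number in approximate log2 space, which is the bit pattern of something close to 1/√x; the Newton step then cleans up the remaining error.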


My version in JavaScript, for the hell of it :-)

function fastInverseSquareRoot(x) {
    let xhalf = 0.5 * x;
    let i = new Float32Array([x]); // Store the float in a typed array so its raw bits can be reinterpreted
    i = new Int32Array(i.buffer);  // View the same bytes as a 32-bit integer

    i = 0x5f3759df - (i[0] >> 1); // Initial guess for Newton's method
    i = new Float32Array(new Int32Array([i]).buffer); // Convert the integer back to floating-point

    x = i[0] * (1.5 - xhalf * i[0] * i[0]); // One iteration of Newton's method
    return x;
}

// Example usage
let result = fastInverseSquareRoot(4.0);
console.log(result); 
// outputs: 
// 0.49915357479239103
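
For comparison, a quick check of how close the approximation gets, and what the second Newton iteration (the commented-out line in the original C code) would roughly buy:

// Exact value for x = 4 is 0.5
const exact = 1 / Math.sqrt(4.0);
const approx = fastInverseSquareRoot(4.0);      // ~0.49915
console.log(Math.abs(approx - exact) / exact);  // relative error ~0.0017 (about 0.17%)

// One extra Newton iteration tightens it considerably
const refined = approx * (1.5 - 0.5 * 4.0 * approx * approx);
console.log(Math.abs(refined - exact) / exact); // relative error ~0.000004 (about 0.0004%)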


I created this performance test on JSFiddle to further illustrate the point, comparing the typical kinds of JavaScript array loops: the traditional vanilla JS for loop (++) outperforms them all, at 2 milliseconds versus 17/20… so why is it best practice to use map or filter in React now? :-)

"Testing for loop (++)..."
"Execution time for for loop:", 2, "milliseconds"

"Testing forEach method (array.forEach())..."
"Execution time for forEach method:", 6, "milliseconds"

"Testing for...of loop (for...of)..."
"Execution time for for...of loop:", 3, "milliseconds"

"Testing map method (array.map())..."
"Execution time for map method:", 17, "milliseconds"

"Testing filter method (array.filter())..."
"Execution time for filter method:", 20, "milliseconds"

https://jsfiddle.net/kurtgrung/ubgcwL0m/18/
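
For reference, a minimal sketch of that kind of benchmark (the actual fiddle may differ slightly, and absolute timings depend on the engine, the machine and the array size):

const array = Array.from({ length: 10_000_000 }, (_, i) => i);

// Tiny timing helper around performance.now()
function time(label, fn) {
  const start = performance.now();
  fn();
  console.log('Execution time for ' + label + ':', Math.round(performance.now() - start), 'milliseconds');
}

time('for loop', () => {
  let sum = 0;
  for (let i = 0; i < array.length; i++) sum += array[i];
});

time('forEach method', () => {
  let sum = 0;
  array.forEach(v => { sum += v; });
});

time('for...of loop', () => {
  let sum = 0;
  for (const v of array) sum += v;
});

time('map method', () => {
  array.map(v => v * 2);
});

time('filter method', () => {
  array.filter(v => v % 2 === 0);
});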


Further reading: Newton’s Method.

Or read this: Unconventional algorithms, hacks & code for performance optimisation.


TL;DR ;-)

The “CLEAN” code principle, which stands for qualities like clarity, simplicity, maintainability, and readability, does not inherently imply sacrificing performance. In fact, clean code often leads to better performance in the long run because it is easier to understand, maintain, and optimise (but there are pitfalls).

However, there might be scenarios where prioritising clean code over performance could lead to issues, particularly in highly performance-critical systems or in cases where extreme optimisation is required. Here are some potential scenarios where adhering strictly to clean code principles might lead to performance issues:

  1. Premature Optimization: Focusing too much on clean code upfront without considering performance requirements might lead to overlooking optimization opportunities. In some cases, it’s better to write code that works first and then optimise where necessary.
  2. Abstraction Overhead: Clean code often involves creating abstractions and layers of indirection for improved maintainability and clarity. However, these abstractions can introduce overhead, especially in performance-critical sections of code (see the sketch after this list).
  3. Generality Over Specificity: Writing general-purpose, reusable code may introduce overhead compared to specialized solutions tailored for specific use cases. Clean code often prioritises generality, which might not always align with performance requirements.
  4. Avoidance of Low-Level Optimizations: Clean code principles often discourage low-level optimizations or “micro-optimizations” in favor of clarity and maintainability. While these optimisations might provide performance benefits, they can also make the code harder to understand and maintain.
  5. Excessive Memory Usage: Clean code may emphasize clarity and readability over memory efficiency. In certain cases, this can result in excessive memory usage, which may impact performance, especially in memory-constrained environments.
  6. Over-Reliance on Frameworks or Libraries: Clean code encourages the use of frameworks and libraries to promote code reuse and maintainability. However, these dependencies may come with performance overhead, especially if they’re not optimised for specific use cases.
  7. Minimalist Algorithms: Clean code often favours minimalist algorithms that are easier to understand but may not be the most efficient in terms of performance. In some cases, more complex algorithms might be necessary for optimal performance.
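
As a closing, hypothetical sketch of points 2, 4 and 5: chained array methods read nicely, but each call allocates an intermediate array, while a single specialised loop does the same work in one pass with no extra allocations:

const readings = Array.from({ length: 1_000_000 }, () => Math.random() * 100);

// Clean and declarative, but filter() and map() each allocate
// a full intermediate array just to produce one number
const avgHighClean = readings
  .filter(r => r > 50)
  .map(r => r * 1.8 + 32)
  .reduce((acc, r, _, arr) => acc + r / arr.length, 0);

// One specialised pass, no intermediate allocations
let sum = 0;
let count = 0;
for (let i = 0; i < readings.length; i++) {
  const r = readings[i];
  if (r > 50) {
    sum += r * 1.8 + 32;
    count++;
  }
}
const avgHighLoop = count ? sum / count : 0;

console.log(avgHighClean, avgHighLoop); // same value (up to rounding), very different allocation profiles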