Code efficiency

Year over year, the computer systems we work with become faster and faster. Still, the software doesn’t feel any faster. Worse, when the hardware isn’t upgraded quickly enough, the system actually feels slower.

We all know the feeling of a new computer with a fresh operating system, or a new phone straight out of the box. They feel fast, and it’s exciting to use them.

So how is it that the same system, three years later, loses that feeling? Even reinstalling the phone or the computer with a fresh operating system won’t bring back the excitement it had when you first unpacked it.

How is that possible? After all, the hardware didn’t change, so why does it still feel slower over time? It’s not as if we use more of the system’s functions on a day-to-day basis.

One of the main causes is that software, operating systems included, slowly gains more functionality. This creates a curious situation: the basic functionality of, say, an operating system has been stable and available for years and hasn’t really changed over time.

Still, more functionality is added, even though not everybody needs it. That added functionality means more code has to be processed, just to show that extra menu item, or to perform that extra check that decides whether an icon is shown or hidden.

One of the most illustrative examples has probably been the introduction of the Windows Aero theme with Windows Vista in 2007, which carried over into Windows 7. If the system was fast enough, the Aero theme would be enabled by default. The interface didn’t bring any new functionality in a technical sense, but it still required a system that was fast enough, with a video card that supported specific features, to draw the screens quickly enough.

The way software is written has a big influence on all this. In the last few years this has become especially clear with the increased use of tools like Electron, and the general shift toward interpreted languages like JavaScript for everything. One might say that with interpreted languages this is only logical: the computer has to do more work to perform the desired action.

This is a reasonable assumption, except that so-called compiled languages have similar issues. A language like C++ uses virtual call tables to call class methods indirectly, and other languages use similar techniques. The reason we use languages like C++, C#, Java and others is that they make it easier to write software. And as these languages evolve, they slowly introduce more indirection. That indirection makes the languages more powerful, but it also requires more compute power to execute the program. At the same time, these innovations became possible precisely because computers have more power.

So there is an interesting cycle going on here: because systems have become faster, creating programs has become easier. And the reason it has become easier is that we use technologies that require that additional compute power.

Whether there is a solution for this, I don’t know. But we developers can certainly do a better job of programming efficiently, and of choosing tools that are well suited to the problem we are trying to solve.