This is what Moore’s Law means
The law was formulated by Intel co-founder Gordon Moore in 1965. Moore’s Law describes the regular doubling of computing performance. Studies in 1997 still confirmed the trend, but strictly speaking it is not a law at all.
- Moore’s Law states that the performance of computers and other technical devices doubles roughly every 18 months. It is not a real law, but a rule of thumb.
- Originally, Moore predicted a doubling every year. After reviewing the data in 1975, he extended the period to two years (a worked example of the resulting growth follows this list).
- In 1965, Gordon Moore derived the rule from his observation that technical and economic factors interact in the development of integrated circuits: packing more components onto a chip keeps the cost per component roughly constant. Increasingly powerful devices can therefore be built without prices rising.
- Many chip manufacturers have in effect complied with the law by using Moore’s prediction as a planning target. Because products were developed at the pace it describes, the prediction has largely come true.
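To make the rule of thumb concrete, here is a minimal sketch of the arithmetic behind it. It assumes a doubling period of two years and takes the Intel 4004 of 1971, with roughly 2,300 transistors, as the starting point; the function name and the chosen years are illustrative, not part of Moore’s original paper.

```python
def projected_transistors(base_count, base_year, year, doubling_period_years=2.0):
    """Project a transistor count using Moore's rule of thumb:
    the count doubles once per doubling period."""
    periods = (year - base_year) / doubling_period_years
    return base_count * 2 ** periods

# Starting point: Intel 4004 (1971) with roughly 2,300 transistors.
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    count = projected_transistors(2_300, 1971, year)
    print(f"{year}: ~{count:,.0f} transistors")
```

Fifty years at this pace means 25 doublings, a factor of about 34 million, which is the right order of magnitude for today’s largest chips.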
The origin of Moore’s Law
Gordon Moore was a co-founder of Intel and a pioneer of the semiconductor industry. In the 1960s, he worked at Fairchild Semiconductor, one of the birthplaces of the modern microchip.
- In 1965, Moore published a short article in the trade magazine Electronics in which he observed that the number of transistors on integrated circuits was doubling approximately every year – while production costs remained the same.
- This observation was initially an empirical finding, not a physical law. It was only later that it became known as “Moore’s Law” and served as a guideline for innovation and planning for the entire computer industry. In the following decades, Moore adjusted his estimate to a doubling every 18 to 24 months.
Technical effects – miniaturization and performance enhancement
Moore’s Law shaped the development of microelectronics for decades. With each new generation of chips, more transistors could be accommodated in a smaller space.
- Feature sizes shrank from several micrometers in the 1970s to a few nanometers today. This miniaturization not only increased computing power, but also improved energy efficiency (a back-of-the-envelope sketch of the density gain follows this list).
- At the same time, the cost per transistor fell dramatically. These advances enabled the explosive development of PCs, smartphones, cloud computing, and artificial intelligence.
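Here is that back-of-the-envelope sketch, under the idealized assumption that transistor area scales with the square of the feature size. Modern node names such as “5 nm” are marketing labels rather than literal feature sizes, so the numbers are illustrative only.

```python
def relative_density(old_feature_nm, new_feature_nm):
    """Idealized scaling: transistor area ~ (feature size)^2, so density
    grows with the inverse square of the feature size."""
    return (old_feature_nm / new_feature_nm) ** 2

# Illustrative shrink from a 10-micrometer process (1970s)
# to a 5-nanometer-class process (today).
print(f"{relative_density(10_000, 5):,.0f}x transistors per unit area")
```

Halving the feature size alone quadruples how many transistors fit in the same area, which is why decades of steady shrinking compounded into factors in the millions.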
Limits of Moore’s Law
Since the 2010s, Moore’s Law has increasingly been reaching its physical and economic limits.
- Transistors today are only a few nanometers in size – smaller structures are difficult to manufacture stably because quantum and thermal effects play an increasingly important role.
- In addition, the development costs of new manufacturing processes are rising rapidly. Clock speeds of modern processors are also barely increasing anymore, because heat output must be kept in check. Manufacturers therefore increasingly rely on parallel processing and specialized chips to achieve performance gains, as the sketch after this list illustrates.
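Parallel processing has a ceiling of its own, which is worth illustrating: according to Amdahl’s law, the serial fraction of a program limits the total speedup no matter how many cores are added. The 95 percent parallel fraction below is an assumed value for illustration.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: the serial part of a program limits the overall
    speedup, no matter how many cores are added."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Assume 95% of the work can be parallelized.
for cores in (1, 4, 16, 64, 1024):
    print(f"{cores:>5} cores: {amdahl_speedup(0.95, cores):.1f}x speedup")
```

Even with 1,024 cores, a workload that is 95 percent parallel gains less than a 20x speedup; this is one reason manufacturers also turn to specialized chips rather than core counts alone.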
Alternatives and further developments
In order to go beyond the limits of classic miniaturization, researchers and industry are working on new concepts – the so-called More-than-Moore approaches. These include:
- 3D chip designs, in which multiple layers of transistors are stacked on top of each other,
- neuromorphic chips, which mimic the human brain,
- quantum computers, which work with qubits instead of bits and enable completely new computing models (see the sketch after this list).
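To give a feeling for why qubits enable new computing models: describing the state of n qubits on a classical machine takes 2^n complex amplitudes, so the bookkeeping doubles with every added qubit. The sketch below only counts this classical memory cost; it is not a quantum simulator, and the 16 bytes per amplitude is an assumption (two 64-bit floats).

```python
# Describing n qubits classically takes 2**n complex amplitudes;
# the cost doubles with every qubit added.
for n_qubits in (1, 10, 30, 50):
    amplitudes = 2 ** n_qubits
    memory_bytes = amplitudes * 16  # assumed: two 64-bit floats per amplitude
    print(f"{n_qubits:>2} qubits: {amplitudes:,} amplitudes, {memory_bytes:,} bytes")
```

At 50 qubits, the classical description already needs about 18 petabytes of memory; this exponential wall is exactly what quantum hardware sidesteps.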
Effects on modern industries
The end of Moore’s Law as we know it has far-reaching implications:
- In artificial intelligence, progress must increasingly be achieved through better algorithms rather than faster hardware (the sketch after this list gives a flavor of this).
- In the cloud industry, energy requirements are rising sharply, making efficiency and cooling key issues.
- In consumer electronics, innovation cycles are slowing down, which is promoting longer product lifecycles and new business models.
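As a toy illustration of “better algorithms instead of faster hardware”: a linear scan does work proportional to the data size, while a binary search on sorted data does logarithmic work. Faster hardware only shrinks the constant in front of the linear scan; the algorithmic change removes almost all of the work. The data size and names below are illustrative assumptions, not a benchmark of any real AI workload.

```python
import bisect

data = list(range(10_000_000))  # sorted data; size chosen for illustration

def linear_search(xs, target):
    """O(n): work grows with the data; faster hardware only shrinks the constant."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """O(log n): a better algorithm removes almost all of the work."""
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

# Same answer, but the binary search inspects ~23 elements
# instead of up to 10 million.
assert linear_search(data, 9_999_999) == binary_search(data, 9_999_999)
```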