Moore’s Law Begins and Ends with Economics

July 18, 2016

Much has been written about the demise of Moore’s Law, the observation that the number of components in a dense integrated circuit doubles roughly every 24 months. This “law” has governed much of how we think about computing power since Gordon Moore penned his seminal paper in 1965.
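For a sense of scale, here is a minimal sketch of what a strict 24-month doubling implies, using the roughly 2,300-transistor Intel 4004 of 1971 as an assumed baseline (the baseline and the extrapolation are illustrative, not figures from Moore’s paper):

```python
# A rough illustration (assumed baseline, not a figure from Moore's paper):
# transistor counts implied by a strict 24-month doubling, starting from the
# ~2,300-transistor Intel 4004 of 1971.
base_year, base_count = 1971, 2_300

for year in range(1971, 2017, 15):
    doublings = (year - base_year) / 2   # one doubling every two years
    print(year, f"{base_count * 2 ** doublings:,.0f}")
```

Even this naive extrapolation lands in the billions of transistors by 2016, roughly the order of magnitude of leading-edge chips at the time.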

Moore’s technological observation was embedded in an economic analysis: he was ruminating on the sweet spot for cost per transistor. In his famous projection, he wrote:

“There is a minimum cost at any given time in the evolution of the technology [and] the minimum is rising rapidly while the entire cost curve is falling. The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.”

It is misguided to think about Moore’s observation as a law. It is not now, nor was it ever, something set in stone. It was an estimation that became a self-fulfilling prophecy, not because it would happen regardless of what we do, but because engineers began building plans around it and companies began investing in it. The road map that has defined the industry was a timeline backed by significant financial muscle and resources.

For too long, Moore’s Law has wrongly been viewed as a technological rule. More accurately, it is an economic principle. Should Moore’s Law fail, it will fail because of business decisions, not technological limits.

Yes, the physics of going below a few nanometers is hugely challenging, and the industry hasn’t solved it (yet). But that challenge only makes the economics harder. The current road map extends to roughly 2022.

To remain on the path Moore specified, we must make the financial investments in physical and human capital that keep us there. But we are witnessing a potential slowdown of Moore’s Law because companies aren’t investing in the research and development (R&D) that would maintain our historic trajectory.

What’s changing? First, the appetite for faster processors at all costs has been waning for some time. Today’s focus has shifted from raw computing muscle to diverse applications that require significantly less computing power.

This isn’t universal. Certain areas, such as rendering 360-degree virtual realities, still require tremendous computing power, and faster computation can lead to new breakthroughs, such as finding cures for debilitating diseases. But, by and large, we’re seeing a decline in the semiconductor content of individual devices. At the same time, we’re seeing semiconductor content proliferate into millions of newly digitized objects.

We are witnessing the rise of entirely new categories whose innovation is not predicated on the legacy of Moore’s Law. The evolution of wearables, the smart home and, broadly, everything encompassed by the Internet of Things is driving the next phase of computing power. In this phase, the focus is on price and basic functionality rather than on the newest or fastest generation of chips.

In conjunction with the rise of diverse digital objects, we are also witnessing the demise of large markets for discrete devices. We are shifting from a few core tech categories enjoying high ownership and density rates to a world where ownership of digital devices is more diffuse and splintered.

This transformative trend is rarely mentioned, especially when we talk about the downfall of an observation that has held true for 50-plus years.

This isn’t to say discrete devices are going away. We will continue to use discrete, digital, connected devices, such as smartphones, tablets, and computers for many years to come. But, at the same time, the proliferation of digital objects everywhere has fragmented our tech world.

Rather than a small number of device categories driving the bulk of the semiconductor market, we now see smaller volume spread across more categories. The innovation we see today doesn’t need to propel Moore’s Law forward the way the innovation of the past did. There isn’t enough volume in a few well-defined categories to pay for the migration between process technologies needed to sustain and justify Moore’s Law, and the chips already available to us largely accomplish the new tasks being digitized.
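To make the economics concrete, here is a back-of-the-envelope sketch with entirely hypothetical numbers: the fixed cost of migrating to a new process node has to be amortized over unit volume, so thinner leading-edge volume pushes up the cost per chip of staying on the road map.

```python
# Hypothetical figures for illustration only; none of these numbers come from
# the article. The point is the shape: a fixed node cost amortized over fewer
# units pushes the per-chip cost up.
def cost_per_chip(fixed_node_cost, marginal_cost, units):
    return fixed_node_cost / units + marginal_cost

FIXED_NODE_COST = 10e9   # assumed cost of developing and tooling a new node
MARGINAL_COST = 20.0     # assumed marginal manufacturing cost per chip

for units in (2e9, 1e9, 0.5e9):
    total = cost_per_chip(FIXED_NODE_COST, MARGINAL_COST, units)
    print(f"{units:,.0f} chips -> ${total:.2f} per chip")
```

Halving the volume over which the same fixed bill is spread raises the per-chip cost sharply, which is why fragmented demand struggles to justify the next migration.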

Digitizing everything is going to demand a lot of silicon. Silicon is where a lot of the magic happens. It’s where the processing and storing of information takes place. In the past, we wanted to do more processing and storing than we had the computing muscle to handle. This drove investment to bring that future to us.

The silicon demanded today isn’t the kind we don’t yet have, the kind that drives investment in R&D; it’s the kind we already have. While we are doing more computing than ever before, we are doing less computationally challenging computing.

We are moving toward a dichotomy: silicon less densely concentrated in a limited number of devices, such as smartphones, laptops, and tablets, alongside silicon spread everywhere. We are seeing the demise of discrete device markets and the rise of the cloud.

Data centers now account for a growing share of semiconductor industry revenue, tangible evidence of the shift from discrete hardware to software. We even see this shift affecting companies: businesses are using software to lower capital expenditures, which explains, in part, why capital spending is growing more slowly than economists expect.

In the past, companies needed to make large capital investments in order to grow their businesses. Today, however, companies can scale at a fraction of the historical cost by leveraging services such as cloud computing and taking advantage of the components already on the market.

Economics, not physics, is the root cause of the demise of Moore’s Law. The lines are blurring between the physical world in which we live and the digital world encroaching on every corner of our lives.

The economic paradigm we now face is driven by the world we are entering, and it puts us on a decidedly different migratory path than the one we have traveled in the past.