Much has been written about the demise of Moore’s Law, the observation that the number of components in a dense integrated circuit doubles every 24 months. This “law” has governed much of how we think about computing power since Gordon Moore penned his seminal paper in 1965.
Moore’s technological observation was made amid an economic analysis. Moore was ruminating on the sweet spot for the cost per transistor. In his famous projection, he wrote:
“There is a minimum cost at any given time in the evolution of the technology [and] the minimum is rising rapidly while the entire cost curve is falling. The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.”
It is misguided to think about Moore’s observation as a law. It is not now, nor was it ever, something set in stone. It was an estimation that became a self-fulfilling prophecy, not because it would happen regardless of what we do, but because engineers began building plans around it and companies began investing in it. The road map that has defined the industry was a timeline backed by significant financial muscle and resources.
For too long, Moore’s Law has wrongly been viewed as a technological rule. More accurately, it is an economic principle. Should Moore’s Law fail, it will fail because of business decisions, not technology inhibitors.
Yes, the physics of going below a few nanometers is hugely challenging, and the industry hasn’t solved it (yet). But that challenge just makes the economics harder. The current road map extends to 2022 or so.
To remain on the path specified by Moore, we must make financial investments in physical and human capital that keep us on that path. But we are witnessing a potential slowdown of Moore’s Law because companies aren’t investing in the necessary research and development (R&D) that would maintain our historic trajectory.
What’s changing? First, the appetite for faster processors at all costs has been waning for some time. Today’s focus has shifted from raw computing muscle to diverse applications that require significantly less computing power.
This isn’t universal. There are certain areas, such as rendering 360-degree virtual realities, that require tremendous computing power. And faster computation can lead to new breakthroughs, such as finding cures for debilitating diseases. But, by and large, we’re seeing a decline in the semiconductor content of individual devices. At the same time, we’re seeing semiconductor content proliferate into a million different newly digitized objects.
We are witnessing the rise of entirely new categories whose innovation is not predicated on the legacy of Moore’s Law. The evolution of wearables, the smart home and, broadly, all of the things encompassed by the Internet of Things is driving the next phase of computing power. In this phase, the focus is on price and basic functionality, rather than on the newest generation of — or fastest — chips.
In conjunction with the rise of diverse digital objects, we are also witnessing the demise of large markets for discrete devices. We are shifting from a few core tech categories enjoying high ownership and density rates to a world where ownership of digital devices is more diffuse and splintered.
This transformative trend is rarely mentioned, especially when we talk about the downfall of an observation that has held true for 50-plus years.
This isn’t to say discrete devices are going away. We will continue to use discrete, digital, connected devices, such as smartphones, tablets, and computers for many years to come. But, at the same time, the proliferation of digital objects everywhere has fragmented our tech world.
Rather than a small number of devices driving the bulk of the semiconductor market, we now see smaller volume spread across more categories. The innovation we see today doesn’t need to propel Moore’s Law forward like the innovation of the past did. There isn’t enough volume in a few well-defined categories to pay for the migration between technologies needed to sustain and justify Moore’s Law, and the chips already available to us are largely accomplishing the new tasks being digitized.
Digitizing everything is going to demand a lot of silicon. Silicon is where a lot of the magic happens. It’s where the processing and storing of information takes place. In the past, we wanted to do more processing and storing than we had the computing muscle to handle. This drove investment to bring that future to us.
The silicon demanded today isn’t the type we don’t have — the kind that drives investment in R&D — it’s the type we already have. While we are doing more computing than ever before, we are doing less computationally challenging computing.
We are moving into a world where silicon is less densely concentrated in a limited number of devices — such as smartphones, laptops, and tablets — and instead spread everywhere. We are seeing the demise of discrete device markets and the rise of the cloud.
Data centers now account for a growing share of revenue for the semiconductor industry, tangible evidence of the shift from discrete hardware to software. We even see this shift affecting companies. Businesses are using software to lower capital expenditure investment — explaining, in part, why capital expenditure is growing more slowly than economists expect.
In the past, companies needed to make large capital investments in order to grow their businesses. Today, however, companies can scale at a fraction of the historical cost by leveraging services such as cloud computing and taking advantage of the components already on the market.
Economics, not physics, is the root cause of the demise of Moore’s Law. The lines are blurring between the physical world in which we live and the digital world encroaching on every corner of our lives.
The economic paradigm we now face is driven by the world we are entering. It is a decidedly different migratory path from the one we have traveled in the past.
The author does an admirable job highlighting the context in which Moore’s Law came into being, and how it’s utilized. The fact of the matter is that there is indeed a lower limit, and it’s not infinitesimally small. If we’re talking silicon, the lower limit is about 0.2 nm, roughly the length of the silicon bond (depending on crystal structure). That won’t change, no matter how many MBAs are thrown at it, and it will only slightly change, in absolute terms, depending on how many scientists and engineers are thrown at it. (I say absolute terms because, as you remember, you can’t put percent in the bank.)
The outcome of this is a sort of scientific version of diminishing returns, meaning more and more scientific effort will be needed to make smaller and smaller improvements. A breakthrough is actually defined as something that breaks this trend. Moore’s Law is dead; when a breakthrough happens, there may or may not be a “Law” that replaces it for the economists.
“First, the appetite for faster processors at all costs has been waning for some time. Today’s focus has shifted from raw computing muscle to diverse applications that require significantly less computing power.”
That right there is a fundamentally flawed understanding of Moore’s law, which results in a rather incoherent article that seems to not really grok what’s going on in the microchip industry, or how Moore’s law drives the tech industry.
Moore’s law says that the number of transistors per area doubles every (time period), which therefore causes the cost per transistor to halve in the same time period. The time period started out at one year, then stretched to two; lately it looks like it’s stretching again to three or four years. Note that the unspoken assumption is that the cost of microchips per unit area holds steady while the transistor density goes up.
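To make that arithmetic concrete, here is a minimal sketch with purely made-up numbers (the cost-per-square-centimeter and starting-density figures are illustrative assumptions, not figures from the comment or the article): if cost per unit area holds steady while density doubles each generation, cost per transistor halves each generation.

```cpp
#include <cstdio>

int main() {
    const double cost_per_cm2 = 10.0;     // hypothetical wafer cost per cm^2, assumed constant across nodes
    double transistors_per_cm2 = 1.0e8;   // hypothetical starting density

    for (int generation = 0; generation < 5; ++generation) {
        double cost_per_transistor = cost_per_cm2 / transistors_per_cm2;
        std::printf("gen %d: %.1e transistors/cm^2, $%.1e per transistor\n",
                    generation, transistors_per_cm2, cost_per_transistor);
        transistors_per_cm2 *= 2.0;       // the doubling Moore observed
    }
    return 0;
}
```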
Three additional things not touched on by Moore were true for a long time — first, the electrical power required to run each transistor went down every time you increased the transistor density. Second, you could increase the computing power of a CPU by increasing its clock speed. Third, you could also increase computing power by spending transistors in a fairly linear fashion. Everything about computer technology from 1970 to around 2004 was built on this four-way engine combining exponential growth in density at constant cost, plus a constant decline in power requirements and constant increases in computing power. Everything in computing got faster and cheaper every couple of years.
The four-way engine started to falter in 2004, when the 90nm process node failed to deliver any gains in electrical efficiency. Electrical leakage from transistors went up dramatically compared to the previous node, which meant that power and cooling requirements went up, not down. It became impractical to make a chip that ran faster than around 3.5 GHz without resorting to heroic cooling apparatus.
Suddenly, instead of coming for free with the jump to a denser node, you had to expend engineering effort to make the per-transistor power requirements continue to go down. And suddenly, you had to balance computing power against electrical consumption. Making a more powerful chip that required more than 100 watts to run was simply not in the cards: the size of the heatsinks and cooling fans required was at odds with the demands of the market, which wanted smaller, more portable computers and servers that cost less to power.
The entire microchip industry shifted its polestar from raw speed to efficiency. The 65nm generation of CPUs was only somewhat more powerful, but it used dramatically less power per computation. Further shrinks have delivered more efficiency gains, but only through sustained engineering effort to keep leakage under control and find more efficient transistor designs. The speed component of the engine driving the tech industry went from exponential improvement to a much flatter line of gradual improvement. And all that extra engineering meant the cost per unit area of a chip started to increase instead of holding steady.
In the past five or so years, the increasing cost per square centimeter has overtaken the cost reduction given by shrinking a chip. Microchip factories have become fantastically expensive, so only a tiny handful of companies can afford to build them at all. And simpler, cheaper chips (microcontrollers and the like) are still being made at 32 or 28 nm, because going to the next smaller node requires paying for double patterning, which would increase the per-chip cost too much.
But for other chips, there is a need for ever more compact transistors and ever more efficient designs, to deliver more performance, more memory and more storage. The server market, in particular, has an insatiable desire for more RAM, more NAND storage, and more CPU performance — but not at the cost of higher power requirements. Cell phone and smartphone makers, similarly, have an insatiable desire for ever more efficient cellular radios, RAM, and NAND, and ever more efficient and powerful SoCs, so they can make phones that last longer per charge, deliver more computing performance per charge, and need smaller batteries. And these two markets are willing to pay the higher costs of the denser process nodes to get what they want.
So the server market and the mobile market continue to drive the two or three companies that can still afford the staggering costs of building state-of-the-art fabs to continue their quest to double transistor density every few years. Moore’s law is still going strong, if more slowly, and will continue to operate through a few more process nodes before quantum tunneling makes it impossible to go further. It’s just not being driven by the same simple “denser transistors = cheaper transistors = more powerful chips” economic formula that worked from 1970 to 2000.
Totally agree Moore’s law is still going strong and remains one of the strongest underlying forces of the industry. It’s just showing up in a different way.
One of the problems of modern software is memory leaks. The software runs in a loop, and if at the end of the loop it doesn’t clean up all the memory it allocated at the beginning of the loop, it has to take fresh memory from the heap. Eventually the software runs out of memory and crashes. This problem is especially critical on mobile, since memory footprints there are smaller.
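A hypothetical C++ sketch of the pattern being described (not actual Symbian or Nokia code): each pass through the loop allocates a buffer and never releases it, so the process’s footprint grows until allocation fails or the system kills the app.

```cpp
#include <cstddef>

void handle_event() {
    int* scratch = new int[1024];  // memory taken from the heap at the start of the loop body
    scratch[0] = 42;               // ... pretend to do real work with the buffer ...
    // Missing cleanup: delete[] scratch; this is the step the comment says gets skipped.
}

int main() {
    for (std::size_t i = 0; i < 100000; ++i) {
        handle_event();            // roughly 4 KiB leaked per iteration, never reclaimed
    }
    return 0;
}
```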
When I left Nokia in the early 2000s, the software ran on Symbian. In the early days, Symbian tried to vet software submitted by developers by checking some simple rules in the source code. I came up with an idea to add a plug-in to the compiler that would check for certain things, including memory leaks. I am very far from server software; I imagine they have similar tools to run static checks. If quality assurance is embedded in the compiler, then there is no need to allocate huge resources, and processors can be smaller.
Memory leaks aren’t only a problem of modern software. They have been a problem throughout the history of software development, and they still happen today. Hence the creation of automatic memory management, whether in the form of GC (garbage collection) or ARC (automatic reference counting). Most modern compilers today ship with tools to detect memory leaks.
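For comparison, here is the same hypothetical loop with the buffer owned by a std::unique_ptr. RAII in C++ is not GC or ARC, but it illustrates the same principle: deallocation happens automatically, so the cleanup step can no longer be forgotten.

```cpp
#include <cstddef>
#include <memory>

void handle_event() {
    auto scratch = std::make_unique<int[]>(1024);  // released automatically when scratch goes out of scope
    scratch[0] = 42;                               // ... pretend to do real work with the buffer ...
}

int main() {
    for (std::size_t i = 0; i < 100000; ++i) {
        handle_event();  // memory footprint stays flat across iterations
    }
    return 0;
}
```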
And I don’t think memory leaks have anything to do with the computing power and Moore’s Law being discussed in this article.
As I say, I haven’t been into application development for a while. I am constantly amazed, though, at the immense growth in the sizes of mobile applications.
Steve Jobs, when pressed by Walt Mossberg on whether iOS really had the same underpinnings as Mac OS X, convincingly said that the bulk of modern applications was actually graphics, fonts and other media assets. When you stripped them out, the OS became actually very small; small enough to fit on the original iPhone.
The reason mobile apps are so large is probably that they have to drive insanely high-res Retina displays, and they have to look great doing it. Memory leaks, or simply huge memory consumption, are a result of managing so many pixels.
The issue isn’t memory leaks in the strict sense of the word (failure to clean up memory). It’s just that the OS and the apps have evolved to take advantage of the huge processing power that Moore’s law has provided.
If the application software smartly utilized iOS graphic assets, there would be no need to use memory-draining custom graphics.
The issue with a memory leak is that when the program runs out of allocated memory, it resets behind the scenes. The user doesn’t see the reset – the UI just stays unresponsive. To avoid frustrating the user for a long stretch of time, processors need to become faster to overcome the reset. Faster speed requires more power, bigger batteries and so on.
I was not aware that programs that run out of memory reset behind the scenes. I thought they were terminated.
https://developer.apple.com/library/ios/technotes/tn2151/_index.html#//apple_ref/doc/uid/DTS40008184-CH1-UNDERSTANDING_LOW_MEMORY_REPORTS
In this case, it is up to the user to restart the application in question.
Of course, in this case it is not handling memory leaks per se, but it is handling legitimate out-of-memory situations caused by loading too much into memory.
As for software utilising iOS graphic assets, that would be true for a text-based app. However, if your app needs to scroll smoothly through lots of Retina images, using iOS graphic assets won’t help you.
If by “resets behind the scenes” you were referring to garbage collection and not a total restart of the app, then you are correct, of course. However, note that Apple decided not to implement GC for iOS and even deprecated it for Mac OS X as well. The reasons are exactly as you say.
So yes, what you are referring to as a “memory leak” can be a problem when coding for Android using Java (which uses a GC). It tends to be less of an issue (or is at least manageable for the more savvy programmers) on iOS.
No, a program contains several pieces, and one part can reset independently. I am referring to how an application works in general, without regard to iOS or Android.