One may wonder: what is the “right” amount of research and development (R&D)? Should a company spend heavily on potential future projects or not? It’s understood that most R&D expenditures will never bear fruit; the vast bulk will never generate profitable businesses or even come to market. How does one decide whom to underwrite among all those nerdy scientists doing inexplicable things in their labs?
Intel has returned to making memory chips, a business it entered early and later abandoned. While the old, failed business was in DRAMs, the new one involves flash, the non-volatile memory fueling the mobile revolution, the Internet of Things, and multilevel server memory architectures, not to mention the solid-state drives replacing some PC hard drives. The rise of this business is fortunate because the company’s traditional volume business in PC processors has slowed. The server processors contributing so mightily to Intel’s bottom line are more profitable than PC processors, but sell in quantities one to two orders of magnitude lower. The company needs to keep its factories full to remain profitable. Perhaps rising memory volumes will make up for some of the decline in PC processors. If not, Intel may have to consider doing foundry work for other firms’ chip designs. Today’s piece lays out the case.
There’s a certain irony in Intel’s return-to-growth financial projection for 2016 in that some of it comes from a resurgent memory business. Those of us with long memories can recall when Intel left that business, ceding it entirely to the Japanese, who had assembled a semiconductor manufacturing juggernaut.
At the time, 1985, the exit was a brave move, a huge gamble for Andy Grove and Gordon Moore, Intel’s president and CEO, respectively. Leaving the dynamic random access memory (DRAM) business meant that Intel’s revenues would plummet, because memory manufacturing accounted for by far the largest share of its revenue at that point. Quite rightly, management understood that for every processor sold, many memory units would be attached to it, and since the economics of semiconductor manufacturing are a lot like those of cake-baking (the oven costs the same whether it is full or half empty), better to sell more cakes than fewer.
However, the Japanese had figured this out, too, and what mattered was scale. Japanese firms had easy access to capital from friendly banking partners, while U.S. firms faced expensive capital. Remember those 20 percent interest rates? It seems like a dream now … All that foreign capital flowing into the United States to capture those high interest rates drove up the dollar, which helped kill off exports. Once the Japanese memory industry rode down the cost curve, it could lower prices to the point where every rival lost money.
High-quality Japanese memories drove Intel out of that market entirely, just in time to focus on the growing microprocessor business, which was taking off as PCs became popular. Processors were more complex than memories and so commanded higher prices. But most importantly, Intel was the near-sole supplier for a standard about to proliferate around the world. Although the exit from DRAMs ultimately proved prescient, at the time it meant facing a Wall Street that often misses the subtleties of why a technology company’s revenue is declining.
But if you live long enough, everything repeats itself. Well, repeats itself more like a spiral than a circle. Intel is back in the memory business, but that business has changed enormously. DRAM technology continues to evolve, but increasingly, system makers like to work with flash, which holds data without electric current yet is still reasonably fast. Flash is critical for mobile devices because it uses little power and, with no moving parts, is hardier in the field. It plays a big role in the nascent Internet of Things (IoT) as well, and it figures into dense server architectures that employ tiers of memory and storage for fast, efficient computing.
Times have changed, though. With its now-unrivaled manufacturing scale, Intel is in a position to dictate the economics of the memory business rather than be victimized by them. This story’s unfolding will be fascinating to watch, particularly given that server processors sell in a ratio to endpoint processors somewhat similar to that of processors to memory chips: one server can serve roughly 20 endpoints. That imbalance implies that Intel’s 14 manufacturing facilities will have plenty of room to make all the memories it wants.
In fact, Intel’s strong growth in servers, mirrored by a sharp decline in mobile and stationary endpoints, points to far fewer gâteaux being baked in its plants overall. Even a memory business growing like a weed might not be enough to fill those factories, which must be kept running at 90 percent capacity or greater to make money. It’s quite possible, then, that Intel will increasingly take on a foundry role in the industry, making chips that others have designed.
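The capacity arithmetic behind that 90 percent figure is easy to sketch. A fab’s costs are overwhelmingly fixed (depreciation, cleanroom upkeep, staff), so profit swings sharply with utilization. The numbers below are illustrative assumptions, not Intel’s actual costs; they simply show why profit evaporates a few points below full loading:

```python
# Illustrative fab economics. All figures are assumptions for the sake
# of the sketch, not Intel's actual costs or capacities. The point is
# that fixed costs dominate, so profit is acutely sensitive to how
# full the factory runs.

FIXED_COST_M = 900.0          # $M per quarter: depreciation, staff, upkeep (assumed)
VARIABLE_COST_PER_WAFER = 2000.0   # $ per wafer started (assumed)
REVENUE_PER_WAFER = 6500.0         # $ per wafer sold (assumed)
FULL_CAPACITY = 250_000            # wafer starts per quarter (assumed)

def quarterly_profit(utilization: float) -> float:
    """Profit in $M when the fab runs at the given fraction of capacity."""
    wafers = FULL_CAPACITY * utilization
    revenue_m = wafers * REVENUE_PER_WAFER / 1e6
    costs_m = FIXED_COST_M + wafers * VARIABLE_COST_PER_WAFER / 1e6
    return revenue_m - costs_m

for u in (1.0, 0.9, 0.8, 0.7):
    print(f"{u:.0%} utilization -> profit ${quarterly_profit(u):,.1f}M")
```

Under these made-up numbers the fab breaks even at 80 percent utilization and bleeds money below that, which is the logic behind keeping the lines above 90 percent, or finding foundry customers to fill them.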
At the moment, the foundry business belongs to Taiwan Semiconductor Manufacturing Co. (TSMC), Samsung, and GlobalFoundries. Important customers for these services include Qualcomm, Apple, Nvidia, and AMD. Like all things coming full spiral, there’s a yin-yang quality to the story: strengths are also weaknesses. Owning lots of factories is wonderful, but having to keep them filled is a Sisyphean task. Below a certain level of capacity, that great strength becomes a gargantuan weakness.