What’s the Right Amount of R&D?

One may wonder: what is the “right” amount of research and development (R&D)? Should a company spend a lot on potential future projects or not? It’s understood that most R&D expenditures will not bear fruit. The vast bulk will never generate profitable businesses or even come to market. So how do you decide whom to underwrite among all those nerdy scientists doing inexplicable things in their labs?

A lot of it depends on where in the value chain you are. In technology, at the deep-science end of the spectrum, high R&D budgets are de rigueur. If you merely use others’ technology to manufacture products, less so. And if you’re essentially a channel for others’ products, even less.

Here are recent stats on select companies’ R&D expenditures, taken from their 2016 income statements, expressed as a percentage of total revenue and listed roughly from high to low (the arithmetic behind these ratios is sketched just after the list):

AMD: 23.60%
Qualcomm: 21.87%
Intel: 21.45%
nVidia: 21.17%
Microsoft: 14.05%
Tesla: 11.91%
IBM: 7.20%
Apple: 4.66%
First Solar: 4.23%
Hewlett Packard Enterprise (HPE): 4.58%
HP Inc. (HPQ): 2.51%
Public Service Enterprise Group: 0.0%
Verizon: 0.0%
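
The arithmetic behind these figures is simple: R&D intensity is just R&D expense divided by total revenue. Here is a minimal sketch of that calculation, using only the rounded dollar figures cited later in this piece (Apple’s roughly $215 billion in revenue and $10.0 billion of R&D, Intel’s $12.7 billion of R&D). Intel’s revenue is backed out from its reported ratio, so treat it as an approximation rather than a reported figure, and rounding leaves Apple a hair under the 4.66% shown above.

```python
# A minimal sketch of the R&D-intensity arithmetic behind the list above.
# Figures are the rounded ones cited in this piece; Intel's revenue is
# inferred from its ratio, so these numbers are illustrative, not reported.

companies = {
    # name:  (revenue in $B, R&D expense in $B)
    "Apple": (215.0, 10.0),
    "Intel": (12.7 / 0.2145, 12.7),  # revenue backed out from the 21.45% ratio
}

for name, (revenue, rnd) in companies.items():
    intensity = rnd / revenue * 100  # R&D as a percentage of total revenue
    print(f"{name}: ${rnd:.1f}B of R&D on ${revenue:.1f}B revenue = {intensity:.2f}%")
```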

They’re almost entirely in the order you’d expect. The semiconductor companies have to bet the farm every year just to stay in business. Finding the next big thing in process technology could mean life or death. It’s interesting to see that the fabless and fab-owning companies spend nearly the same proportion of revenue. The implication is that even fabless companies need to engineer aspects of a semiconductor foundry, and that most foundry cost is plant and equipment rather than R&D. They make a tight pack, this group, as if they were all following a formula.

Next down the list, Microsoft, the original intellectual property (IP) company, has always invested heavily in its future. In the fat old days, around 1999, when Microsoft had roundly defeated Apple for the PC market, wrested control of computing’s future from IBM, and not yet been attacked for antitrust violations, its R&D budget was 15.04% of revenue, barely above where it is today. Microsoft sits high upstream on the IP river; licensing its technology to others is its main business. Its innovation pipeline is a cornerstone of its business model, and the company has to keep that conduit moving.

Tesla cracks double digits, but is surprisingly not higher, given that its field of endeavor didn’t exist at all until just recently. The company has hundreds of patents related to electric vehicles. Perhaps the answer lies in a cross-license that Tesla donated to the public domain in 2014. Offering this cross-license enabled Tesla to use others’ patents (notably those of automobile manufacturers), thus relieving some of the pressure on its own labs to come up with everything needed to build a complex — and ever-evolving — system of systems.

IBM has a long tradition of R&D and many esteemed scientists on staff. Although years of declining revenue have taken a toll, IBM still prioritizes R&D, continuing to work on advanced semiconductor processes (even after having sold its silicon fabs to GlobalFoundries). In addition, the company maintains major programs working on, among other things, the development of brain-like software and quantum computing systems.

Apple’s ratio needs some delving. At first glance, 4.66% seems strikingly low, given how many new things Apple has brought to market. One way to look at it: you can only get so much innovation for any amount of money. With $215 billion in annual revenue, Apple’s R&D expenditure may be low percentage-wise, but, at $10.0 billion, the dollar amount is not much below Intel’s $12.7 billion. Also, Apple’s business is built on many technologies, components, and engineering support from companies like Microsoft, IBM, Qualcomm, and others. Despite projects like advanced semiconductor development, Apple’s business model relies on others’ IP, which lets it share development costs and forgo some R&D investment.

First Solar again surprises. Its R&D expenditures are not very high in either percentage or dollar terms. Perhaps it is budget-constrained, but the cutting edge of solar technology is ripping right along, and further investment will be required if the company is to remain competitive.

Ah, the next two, the evil twins. HPE and HPQ used to be one, and they didn’t always have such tight budgets. Reaching back in history, I found my notes from a talk given by Phil McKinney, at the time vice president and chief technology officer of Hewlett-Packard’s (HP’s) Personal Systems Group. He was addressing an analyst council that I attended in 2006. In his lecture, he described a more focused approach to R&D than had been undertaken before. He talked about “70% dedicated to core, 20% to adjacencies, and 10% totally new.” He described a pipeline that took initial research from a corporate incubator to hosting in the product groups (assuming a given project made it).

McKinney described the recent creation of an Innovation Program Office, independently budgeted and separate from the operating groups. A researcher from that office could fairly easily initiate a project lasting 30-90 days and costing $100,000. At that point, it had to be reviewed and qualified by an internal board. If it passed muster, the team was expected to produce a prototype and a target customer set for $100,000 to $1 million over the next 60-180 days. In the following 90-180 days, assuming the project was still a go, the team could spend $1-5 million on product development and early-adopter engagement. Then, if market and technical signals were right, came a calibration and expansion phase, which could cost $5-20 million. Somewhere around that last step, the project and its budget were transferred to the appropriate operating group.
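
To make the cadence easier to see, here is a minimal sketch of that stage-gate pipeline as a simple data structure, using only the duration and budget ranges quoted above. The stage names are my own shorthand, not HP’s, and the final stage’s duration wasn’t given in the talk.

```python
# A minimal sketch (my shorthand, not HP's actual process tooling) of the
# stage-gate pipeline McKinney described. Duration and budget ranges come
# from the figures quoted above; the final stage's duration was not stated.

PIPELINE = [
    # (stage,                            days (min, max),  budget in $ (min, max))
    ("Incubator project",                (30, 90),         (100_000, 100_000)),
    ("Prototype + target customer set",  (60, 180),        (100_000, 1_000_000)),
    ("Product dev + early adopters",     (90, 180),        (1_000_000, 5_000_000)),
    ("Calibration and expansion",        None,             (5_000_000, 20_000_000)),
]

# Worst case, a project that survives every gate consumes roughly:
max_spend = sum(budget[1] for _, _, budget in PIPELINE)
print(f"Maximum cumulative spend before handoff: ${max_spend / 1e6:.1f}M")  # ~$26.1M
```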

Sounds rational, right?

Just for laughs, I looked at HP’s R&D expenditures from the wild west days before then-CEO Mark Hurd reined things in a bit. Well, maybe more than a bit. People at the time credited Hurd with dismantling the great HP innovation machine. But the fact is, a lot of Ph.D.s on staff were pursuing wacky science projects with not much oversight. Even then, the 5.46% of revenue spent on R&D in 2002 was still rich by today’s standards. In 1997, before HP’s expensive merger with Compaq (and various R&D cuts by Hurd’s predecessor, Carly Fiorina), that figure was 8.39%. Now, remember, HP had unique intellectual property in the printer business, but much of the R&D supporting one of HP’s other large divisions, Personal Systems, was being executed by HP suppliers Microsoft (operating and application software) and Intel (processors). Thus, the 8.39% was rather high, supporting a raft of “science projects” that would never turn into real businesses.

Finally, the last two.

Public Service Enterprise Group is a public utility. Most of the technologies it uses are tried and true, and in the rare cases that a new technology ends up in the mix, it comes from a specialty company or one of the big players at the top end of the supply chain. Thus, it has no R&D budget.

I checked with Verizon just to be sure, and here is what corporate communications staffer Kimberly Ancin wrote me:

“Zero is actually a correct number. We don’t do any R&D ourselves, we’re not a manufacturer so we don’t have an R&D budget and therefore don’t report any expenditures for R&D in our earnings. In wireless, for example, the handset manufacturers and other suppliers do R&D.”

So, what is the right amount of R&D investment? How can a company ensure a return on its efforts to stay on the cutting edge of tech innovation? It’s hard to have confidence, to know you’re on the right track, even as potentially billions of dollars wash out the door.

Big, industry-building innovations always come at significant risk and cost. A lot of R&D and investment are required upfront, and the subsequent payoff is uncertain, even if the invention succeeds. Companies that make game-changing innovations have a choice about how to commercialize their technologies. They can try to make a go of it alone, introducing a proprietary version of a product based on their IP, or they can seek standardization and broad distribution through licensing.

Examples of companies choosing the former include Segway, whose growth in personal transport vehicles was stunted by its proprietary strategy, and Sun Microsystems, which was acquired by Oracle when it lost the scale battle to the x86 camp.

A good example of the open, standardized regime occurs in wireless telephony, where Ericsson, Motorola, Nokia, and Qualcomm chose to invest up front, cross-license or otherwise make their IP commonly available, and charge license fees to manufacturers to use their technologies. In this way, the companies were able to foster a vibrant ecosystem, spending tens of billions of dollars on IP, which they were then able to recover — sometimes years later — when their customers sold products based on that IP.

It seems pretty obvious from these examples that open licensing is a superior market model to the proprietary approach, particularly with complex technologies covering a broad range of interrelated areas that must be tuned to each other for the entire system to work. A proprietary approach works only if the company taking it becomes a monopoly, and monopolies are inefficient for a variety of reasons. Most lasting industries (computers, mobile phones, automobiles, oil & gas, electricity) are based on a broad regime of technology patents and open licensing.

Memory Goes and Comes

Intel has returned to making memory chips, a business it exited decades ago. While the old, failed business was in DRAMs, the new one involves making flash, the non-volatile memory fueling the mobile revolution, the Internet of Things, and multilevel server memory architectures, not to mention some PC hard drives. The rise of this business is well timed, because the company’s traditional volume business in PC processors has slowed. The server processors contributing so mightily to Intel’s bottom line are more profitable than the PC processors but sell in quantities one to two orders of magnitude lower. The company needs to keep its factories filled to remain profitable. Perhaps the rise in memory volumes will help make up for some of the decline in PC processors. If not, Intel may have to consider doing foundry work for other firms’ chip designers. Today’s piece lays out the case.

There’s a certain irony in Intel’s return-to-growth financial projection for 2016 in that some of it comes from a resurgent memory business. Those of us with long memories can recall when Intel left that business, ceding it entirely to the Japanese, who had assembled a semiconductor manufacturing juggernaut.

At the time, 1985, the exit was a brave move, a huge gamble for Andy Grove and Gordon Moore, Intel’s president and CEO, respectively. Leaving the dynamic random access memory (DRAM) business meant taking a real hit to revenue and walking away from the product line the company had been built on; memory had long accounted for a large share of Intel’s sales. Quite rightly, management understood that for every processor sold, many memory units would be attached to it, and since the economics of semiconductor manufacturing are a lot like cake-baking, better to sell more cakes than fewer.

However, the Japanese had figured this out, too, and what mattered was scale. Japanese firms had easy access to capital from friendly banking partners, while U.S. firms faced expensive capital. Remember those 20 percent interest rates? It seems like a dream now … All that foreign capital flowing into the United States to capture those high rates also drove the dollar up, which helped kill off exports. Once the Japanese memory industry rode down the cost curve, it was able to lower prices to the point where all rivals lost money.

Quality Japanese memories drove Intel to abandon them entirely, just in time to focus on the growing microprocessor business, which was taking off as PCs started to become popular. Processors were more complex than memories, and so commanded a higher price. But most importantly, Intel was the near-sole supplier for a standard that was about to proliferate around the world. Although the exit from DRAMs ultimately proved prescient, at the time it meant facing a Wall Street that often misses subtleties in explanations of why a technology company’s revenue is declining.

But if you live long enough, everything repeats itself. Well, repeats itself more like a spiral than a circle. Intel is back in the memory business, but that business has changed enormously. DRAM technology continues to evolve, but increasingly, system makers like to work with flash, which holds data without electric current but is still pretty fast. Flash is critical for mobile devices because it doesn’t use much power, and, with no moving parts, it’s hardier in the field. It plays a big role in the nascent Internet of Things (IoT) as well. It also figures into dense server architectures that employ tiers of memory and storage for fast, efficient computing.

Times have changed, though. With its now-unrivaled manufacturing scale, Intel is in a position to dictate the economics of the memory business rather than being victimized by them. This story’s unfolding will be fascinating to watch, particularly given that server processors sell in a ratio to endpoint processors somewhat similar to that of processors to memory chips; that is, one server can serve about 20 endpoints. This balance implies that Intel’s 14 manufacturing facilities will have plenty of room to make all the memories it wants.

In fact, Intel’s strong growth in servers, mirrored by a sharp decline in mobile and stationary endpoints, points to far fewer gâteaux being baked in its plants overall. Even a memory business growing like a weed might not be enough to fill those factories, which must be kept running at 90 percent capacity or greater to make money. It’s quite possible, then, that Intel will increasingly take on a foundry role in the industry, making chips that others have designed.

At the moment, the foundry business belongs to Taiwan Semiconductor Manufacturing Company (TSMC), Samsung, and GlobalFoundries. Important customers for these services include Qualcomm, Apple, nVidia, and AMD. Like all things coming full spiral, there’s a yin-yang quality to the story: strengths are also weaknesses. Owning lots of factories is wonderful, but having to keep them filled is a Sisyphean task. Below a certain level of capacity, that great strength becomes a gargantuan weakness.