AMD and SYSmark: Out of Intel’s Shadow

This article is a guest contribution from Rob Enderle, Principal Analyst, The Enderle Group.

This week AMD pulled the plug on its participation in BAPCo’s SYSmark benchmarking project. There has been a bit of drama around this, with some folks blaming Intel for being too heavy-handed and others blaming AMD for being too thin-skinned, but the reality is that the two companies are now on different paths and, as a result, a collaborative common benchmark no longer makes sense. Let me explain.


SYSmark is a benchmark that was designed to measure PCs largely as they were in the 1990s: heavily reliant on office applications and used mostly as standalone workstations. Back then graphics were largely reserved for gamers, and SYSmark was about business. Intel and AMD were on the same CPU-centric path, and neither had any real strength in graphics, which came from companies like ATI and NVIDIA, added after the fact and only on high-end systems generally not targeted at business.

This was a world defined by Intel, which remained throughout vastly larger and better funded than AMD, while AMD played the role, albeit involuntarily, of backup vendor to Intel. Still, it was a good business until the two companies stopped being socket compatible, and that one move changed the impression that beating Intel was very difficult for AMD into the impression that it was impossible.

AMD was simply overmatched.

Changing the Game

So AMD changed the game. Realizing that both companies were very weak when it came to graphics, they took a huge gamble and bought ATI, which had been struggling against NVIDIA but was better matched to that company than AMD was to Intel. This move set AMD back a bit against Intel as they integrated the two technologies, but they ended up with a dramatically different part, called Fusion, a hybrid of the technology they had and the technology they bought.

This part was focused on where applications seemed to be going, toward something we were calling GPU computing, and as a hybrid it was designed to bridge between the past and the future. In short, AMD no longer agreed with Intel about how people were going to use their PCs, and this put SYSmark at risk.

The Death of SYSmark

You see, a collaborative benchmark is only good if the two companies providing the technology can agree on how to measure it. Once they disagree, the benchmark is done. Just as you wouldn’t benchmark a sports car against a truck, if two products are fundamentally different it makes no real sense to run a common benchmark against them. In fact, people weren’t using their PCs the way they were in the ’90s anyway. Few are using spreadsheets or local databases anymore, as these have given way to hosted and cloud-based remote applications. Movies are being streamed, and applications are increasingly calling on the graphics side of the PC to render, transcode, or even run highly parallel new applications.

SYSmark needed to go through a dramatic change anyway, but Intel and AMD, being on different paths, could no longer agree on what that change should be, and that disagreement killed it.

Wrapping Up: The End of an Era

We’ve really reached the end of another personal computer era; the web, cloud services, GPU computing, and a huge shift in focus to hardware that is better connected, lighter, and longer-lived on battery have forever changed the world that was into the world that will be. The death of SYSmark is no one’s fault: Intel isn’t being evil, and AMD isn’t being unreasonable. The firms changed, the market changed, and a common benchmark between the vendors simply made no more sense.

Intel vs. ARM – The Battle Is Just Beginning

This is a guest contribution from Jack Gold, founder and principal analyst at J.Gold Associates, LLC, an information technology analyst firm based in Northborough, MA, covering the many aspects of business and consumer computing and emerging technologies.

The market seems to think that the folks at ARM and its licensees (TI, Nvidia, Qualcomm, Marvell, Apple, et al.) are on the verge of attacking Intel where it is most susceptible: the PC and server space. Indeed, ARM is making inroads with low-power designs and has a virtual monopoly on mobile devices. But the path to PCs and servers is very different from the path to smartphones and tablets. And clearly Intel doesn’t think it can afford to concede any territory, which is why it is pushing back hard on ARM’s mobile “heartland.” So let’s step back and see what Intel has going for it versus the ARM ecosystem.

Many observers have a bias toward ARM and are discounting Atom’s potential for success in phones and tablets, but I think Atom really does have a chance to succeed and thrive. Perhaps not in the current version, but in the next generation of chips Intel will launch in the next 6–12 months. And I believe Intel will stay very far ahead of ARM in the race for PCs and even high-end tablets. Why? Here are some reasons.

First, Intel’s huge investment in process technology is not putting it at a disadvantage, as some have suggested. Actually, it’s the other way around. The ARM camp is relying on the foundries to make the process-improvement investment for it. But after the foundries have matched Intel’s recent investment of $15B or so, they will still have to recoup that investment, and that will mean higher chip costs for the fabless chip vendors (no free lunch here). At the end of the day, process advantage does matter. It is how Moore’s Law has remained in play, and process advantage means higher-performing chips at lower power and, eventually, lower cost (as yields increase). And Intel’s recent development of 22nm and 3-D transistors means its lead is growing, giving it a two-year (or more) advantage over the competition.

Second, the conversation comparing ARM to Intel usually turns to RISC vs. CISC. I thought we settled that argument years ago with Transmeta, and with MIPS before that. But I guess not. The bottom line is that for more complex systems with increasingly complex computing requirements, longer and more complex instruction sets improve performance. This is what Microsoft found out years ago when it suspended development of Windows on RISC. Yes, it now says it will have Windows 8 running on ARM. But the question remains: what version, and with what features? There is no doubt in my mind that the highest-end, more performance-oriented versions of Windows will remain focused on the x86 architecture. And don’t forget that ARM isn’t even on 64 bits yet. Imagine a server with a large database running on a 32-bit RISC architecture compared to a full-featured 64-bit CISC version. So as functions get more complex, specialized instructions and HW additions give x86 (including Atom) an advantage unless ARM adds the same HW and SW extensions.

The third issue is compatibility. There is a perception that ARM is compatible across platforms and vendors, and clearly it’s not. Look at the upgrade problem faced by older devices in the market, even among devices from the same manufacturer. In fact, different licensed versions of the ARM architecture have incompatibilities. And deep licensees (e.g., Qualcomm) are building their own architectures that are supposed to be compatible with other vendors’ chips (but are they?). ARM fragmentation is an issue that usually goes undiscussed. But it is real nonetheless.

Finally, many think of Intel as a chip company and forget that it has tens of thousands of SW engineers on staff. This allows it to create the best compilers in the industry and to optimize ports to its platform well beyond what others can do. And Wind River gives Intel incredible breadth in tools and designs. Many perceive Google’s commitment to Android on Atom as lukewarm, but Intel has invested considerable resources in porting and optimizing Android for its platforms, albeit a bit late. And even though the current WP7 doesn’t run on Atom, it is quite likely that WP Next (i.e., the Windows 8 generation) could easily do so if there is OEM demand, which there well may be, especially in the tablet space. And now that Intel has McAfee in its stable, it is very likely to create industry-leading HW-enabled security features that users will find appealing and competitors will have trouble duplicating.

Of course, MeeGo remains a sore spot for Intel, especially after Nokia’s rejection of the OS for its devices. It is not clear MeeGo will ever get out of the niche markets it is now targeting. But clearly some vendors see it as an alternative to Android’s (and Apple’s) hegemony, especially in emerging markets. So while it may never achieve the huge unit volumes its competitors will ship, it will nevertheless have a credible niche to exploit. And Intel is not riding MeeGo as its only path to success.

Bottom Line: For Intel, it’s about the ecosystem. As code gets more complex, it becomes increasingly difficult to produce and manage, especially across multiple platforms. This is problematic for both OS developers and ISVs who want to port their apps (look at how many versions of Android apps there are, and not just for OS version numbers but also for different devices from different manufacturers). Intel’s x86 consistency is a strong point, and fragmentation plays to its strength. Certainly I’m not signaling the death knell for ARM. But those who minimize Atom’s future potential are making a mistake.