Intel pushes FPGAs for mainstream enterprise acceleration

NVIDIA gets most of the attention, and rightfully so, for accelerating the move to more advanced processing technologies with its massive push of GPU hardware into servers for all kinds of general compute purposes, but Intel has a few irons in the fire as well. While we are still waiting to see what Raja Koduri and the graphics team can do on the GPU side itself, Intel has another angle to improve efficiency and performance in the data center.

Intel’s re-entry into the world of accelerators comes on the heels of a failed attempt at bridging the gap with a twist on its x86 architecture design, initially called Larrabee. Intel first announced and demonstrated this technology, which combined dozens of small x86 cores on a single chip, at an Intel Developer Forum under the pretense that it was a discrete graphics solution. That well dried up quickly though, as the engineers realized it couldn’t keep up with the likes of NVIDIA and AMD in graphics rendering. Larrabee eventually became a discrete co-processor called Knights Landing, shipping in 2015 but killed off in 2017 due to lack of customer demand.

Also in 2015, Intel purchased Altera, one of the largest makers of FPGAs (field-programmable gate arrays), for just over $16 billion. These chips are unique in that they can be reprogrammed and adjusted as workloads and algorithms shift, giving enterprises the equivalent of custom-architecture processors on hand as they need them. Xilinx is the other major player in this field and, now that Intel has gobbled up Altera, must face down the blue-chip giant in a new battle.

Intel’s purchase made a lot of sense even at the time, but the company is seeing the fruits of that investment now. As NVIDIA has proven, more and more workloads are being shifted from general-purpose processors like the Xeon family to efficient and powerful secondary compute models. The GPU is the most obvious solution today, but FPGAs are another, and one that is growing substantially with the move to machine learning and artificial intelligence.

Though the technology initially shipped as a Xeon processor and FPGA die combined on a single package, Intel now offers customers Programmable Acceleration Cards (PACs) featuring the Intel Arria 10 GX FPGA as an add-in option for servers. These are half-height, half-length PCI Express add-in cards with a PCIe 3.0 x8 interface, 8GB of DDR4 memory, and 128MB of flash storage. They operate inside a 60-watt envelope, well below the Xeon CPUs and NVIDIA GPUs they are looking to supplant.

Intel has spent a lot of time and money developing the necessary software stack for this platform as well, called the Acceleration Stack for Intel Xeon Scalable processors with FPGAs. It provides acceleration libraries, frameworks, SDKs, and the Open Programmable Acceleration Engine (OPAE), all of which attempt to lower the barrier to entry for developers bringing work to the FPGA. One of Intel’s biggest strengths over the last 30 years has been its focus on developers and enabling them to code and produce on its hardware effectively – I have little doubt Intel will be class-leading for its Altera line.

Adoption of the accelerators should pick up with the news that Dell EMC and Fujitsu are selling servers that integrate the FPGAs for the mainstream market. Gaining traction with top-tier OEMs like Dell EMC means awareness of the technology will increase quickly, and adoption, if the Intel software tools do their job, should spike. The Dell PowerEdge R740 and R740XD will be able to support up to four FPGAs while the R640 will support a single add-in card.

Though performance claims are light, mainly due to the bespoke nature of each FPGA implementation and the customer using and coding for it, Intel has stated that tests with the Arria 10 GX FPGA can see a 2x improvement in options trading performance, 3x better storage compression, and 20x faster real-time data analytics. One software partner, Levyx, which provides high-performance data processing software for big data, built an FPGA-powered system that achieved “an eight-fold improvement in algorithm execution and twice the speed in options calculation compared to traditional Spark implementations.”

These are incredible numbers, though Intel has a long way to go before adoption of this and future FPGA technologies can rival what NVIDIA has done for the data center. There is a large opportunity in the areas of AI, genomics, security, and more. Intel hasn’t demonstrated a sterling record with new market infiltration in recent years, but thanks to the experience and expertise that the Altera team brings with that 2015 acquisition, Intel appears to be on the right track to give Xilinx a run for its money.

NVIDIA GTC proves investment in developers can pay dividends

Last week NVIDIA hosted its annual GPU Technology Conference in San Jose and I attended the event to learn about what technologies and innovations the company was planning for 2018 and beyond. NVIDIA outlined its advancements in many of its growing markets including artificial intelligence, machine learning, and autonomous driving. We even saw new announcements around NVIDIA-powered robotics platforms and development capabilities. Though we were missing information on new GPU architectures or products aimed at the gaming community, there was a lot of news to take in from the show.

I came away from the week impressed with the execution of NVIDIA and its engineering teams as well as the executive leadership that I got to speak with. CEO Jensen Huang was as energetic and lively as I have ever seen him on stage and he maintained that during analyst and media briefings, mixing humor, excitement, and pride in equal doses. It is one of the areas of impact that a show like GTC can have that doesn’t make it to headlines or press releases but for the audience that the show caters to, it’s critical.

The NVIDIA GTC site definitely states its goal upfront.

GTC remains one of the last standing developer-focused conferences from a major technology hardware company. Though NVIDIA will tell you (as it did me) that it considers itself as much a software company as a chip company, the fact is that NVIDIA has the ability to leverage its software expertise because of the technological advantages its current hardware lineup provides. While events like Facebook F8, Cisco DevNet, and Microsoft BUILD continue to be showcases for those organizations, hardware developer conferences have dwindled. Intel no longer holds its Intel Developer Forum, AMD has had no developer-focused show for several years, and giants like Qualcomm and Broadcom are lacking as well.

GTC has grown into a significant force of change for NVIDIA. Over the 10+ years of its existence, attendance has increased by more than 10x from initial numbers. The 2018 iteration drew more than 8,500 attendees, including developers, researchers, startups, high-level executives from numerous companies, and a healthy dose of media.

NVIDIA utilizes GTC to reach the audience of people that are truly developing the future. Software developers are a crucial piece, and the ability to equip them with information about tool sets, SDKs, and best practices turns into better applications and more usage models applied to GPU technology. The educational segment is impressive to see in person, even after many years of attendance. I find myself wandering through the rows and rows of poster boards describing projects that include everything from medical diagnosis advancements to better utilization of memory for ray tracing, all of course built on GPU processing. It’s a reminder that there are real problems to solve and that much of the work is still done by these small groups of students, not by billion-dollar companies.

Of course, there is a benefit to NVIDIA. The more familiar these developers and researchers are with the technology and tools it provides, both in hardware and software, the better the long-term future for NVIDIA in the space. Technology leaders know that leading in technology itself is only part of the equation. You need to convince the right people that your better product is indeed better and provide the proof to back it up. Getting traction with development groups and fostering them with guides and information during the early stages of technological shifts is what helped create CUDA and cement it as the GPU compute language of choice for the better part of a decade.

NVIDIA wants the same to occur for machine learning and AI.

The GPU Technology Conference is the public-facing outreach program that NVIDIA spends a tremendous amount of money hosting. The beginnings of the show were bare and equal parts gaming and compute, but the growth and redirection toward a professional development event prove that it has paid dividends for the company. Just look at the dominance that NVIDIA has in the AI and ML spaces in which it was previously a non-contender; that is owed at least in part to the emphasis and money pumped into an event that produces great PR and great ideas.

As for other developer events, the cupboard is getting bare for hardware companies. Intel cancelled the Intel Developer Forum a couple of years back. In hindsight, this looks like an act of hubris, as if Intel believed it was big and important enough that it no longer needed to court developers and convince them to use its tech.

Now that Intel is attempting to regain a leadership position in these growing markets that companies like NVIDIA and Google have staked ground in, such as autonomous driving, artificial intelligence, and 5G, the company would absolutely benefit from a return of IDF. Whether or not the leadership at Intel recognizes the value that the event holds to developers (and media/analyst groups) has yet to be seen. And more importantly, does that leadership understand the value it can and should provide to Intel’s growing product groups?

There are times when companies spend money on events and marketing for frivolous and unnecessary reasons. But proving to the market (both developers and Wall Street) that you are serious about a technology space is not one of them. NVIDIA GTC shows that you can accomplish a lot of good with this kind of investment, and I think the success the company has seen in areas of machine learning proves its value. What started out as an event that many thought NVIDIA created out of hubris has turned into one of the best outward signs of being able to predict and create the future.

NVIDIA DGX-2 solidifies leadership in AI development

During the opening two-plus-hour keynote to NVIDIA’s GPU Technology Conference in San Jose this week, CEO Jensen Huang made announcements and proclamations on everything from autonomous driving to medical imaging to ray tracing. The breadth of coverage is substantial now for the company, a dramatic shift from its roots solely in graphics and gaming. These kinds of events underscore the value that NVIDIA has created as a company, both for itself and the industry.

In that series of announcements Huang launched a $399,000 server. Yes, you read that right – a machine with a $400k price tag. The hardware is aimed at the highest end, most demanding AI applications on the planet, combining the best of NVIDIA’s hardware stack with its years of software expertise. Likely the biggest customer for these systems will be NVIDIA itself as the company continues to upgrade and improve its deep learning systems to aid in development of self-driving cars, robotics, and more.

The NVIDIA DGX-2 claims to be the world’s first 2 petaFLOPS system, generating more compute power than any competing server of similar size and density.

The DGX-2 is powered by 16 discrete V100 graphics chips based on the Volta architecture. These sixteen GPUs have a total of 512GB of HBM2 memory (now 32GB per GPU rather than 16GB) and an aggregate bandwidth of 14.4 TB/s. Each GPU offers 5,120 CUDA cores for a total of 81,920 in the system. The Tensor cores that make up much of the AI capability of the design breach the 2.0 PFLOPS mark. This is a massive compilation of computing hardware.
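For readers who want to check the math, here is a minimal arithmetic sketch, assuming the published per-GPU figures for the 32GB Tesla V100 (roughly 900 GB/s of HBM2 bandwidth, 5,120 CUDA cores, and about 125 TFLOPS of Tensor-core throughput):

```python
# Back-of-the-envelope check of the DGX-2 aggregate specs quoted above.
GPUS = 16
hbm2_capacity_gb = 32 * GPUS           # 512 GB of HBM2 in total
hbm2_bandwidth_tb_s = 0.9 * GPUS       # ~14.4 TB/s aggregate memory bandwidth
cuda_cores = 5_120 * GPUS              # 81,920 CUDA cores
tensor_pflops = 125 * GPUS / 1000      # ~2.0 PFLOPS of Tensor-core throughput

print(hbm2_capacity_gb, round(hbm2_bandwidth_tb_s, 1), cuda_cores, tensor_pflops)
```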

The previous DGX-1 V100 system, launched just 6 months ago, ran on 8 GPUs with half the memory per GPU. Part of the magic that makes the DGX-2 possible is the development of NVSwitch, a new interconnect architecture that allows NVIDIA to scale its AI integrations further. The physical switch itself is built on 12nm process technology from TSMC and encompasses 2 billion transistors all on its own. It offers 2.4 TB/s of bandwidth.

As PCI Express became a bottleneck for multi-GPU systems crunching the enormous data sets typical of deep learning applications, NVIDIA worked on NVLink. First released with the Pascal GPU design and carried over to Volta, NVLink gives the V100 chip support for six connections and a total of 300 GB/s of bandwidth for cross-GPU communication.

NVSwitch builds on NVLink as an on-node design and allows any pair of GPUs to communicate at full NVLink speed. This facilitates the next level of scaling, moving beyond the per-GPU limit on NVLink connections and allowing a network to be built around the interface. The switch itself has 18 NVLink ports, each made up of eight 25 Gbps lanes per direction. Though the DGX-2 uses twelve NVSwitch chips to connect 16 GPUs, NVIDIA tells me that there is no technological reason it couldn’t push beyond that; it is simply a question of need and physical capability.
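To make those figures concrete, here is a quick sketch of the bandwidth arithmetic, assuming the standard NVLink 2.0 configuration of eight 25 Gbps lanes per direction per link:

```python
# NVLink 2.0 bandwidth arithmetic implied by the description above.
lanes_per_direction = 8
lane_rate_gbps = 25                                              # per lane, per direction

per_direction_gb_s = lanes_per_direction * lane_rate_gbps / 8    # 25 GB/s one way
per_link_bidir_gb_s = 2 * per_direction_gb_s                     # 50 GB/s per link

links_per_v100 = 6
print(links_per_v100 * per_link_bidir_gb_s)                      # 300 GB/s per V100, as quoted
```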

With the DGX-2 system in place, NVIDIA claims to see as much as a 10x speedup in just the 6 months since the release of DGX-1, on select workloads like training FAIRSEQ. Compared to traditional data center servers using Xeon processors, Huang stated that the DGX-2 can provide computing capability at 1/8 the cost, 1/60 the physical space, and 1/18 the power. Though the repeated line of “the more you spend, the more you save” might seem cliché, NVIDIA hopes that those organizations investing in AI applications see value and adopt.

One oddity in the announcement of the DGX-2 was Huang’s claim that it represented the “world’s largest GPU”. The argument likely stems from Google’s branding of the “TPU” as a collection of processors, platforms, and infrastructure into a singular device and NVIDIA’s desire to show similar impact. The company may feel that a “GPU” is too generic a term for the complex systems it builds, which I would agree with, but I don’t think co-opting a term that has significant value in many other spaces is the right direction.

In addition to the GPUs, the DGX-2 does include substantial hardware from other vendors acting as support systems: a pair of Intel Xeon Platinum processors, 1.5 TB of system memory, eight 100 GigE network connections, and 30TB of NVMe storage. This is an incredibly powerful rackmount server that services AI workloads at unprecedented levels.

The answer I am still searching for is to the simple question of “who buys these?” NVIDIA clearly has its own need for high-performance AI compute capability, and the need to simplify and compress that capability to save money on server infrastructure is substantial. NVIDIA is one of the leading developers of artificial intelligence for autonomous driving, robotics training, algorithm and container set optimization, etc. But other clients are buying in – organizations like New York University, Massachusetts General Hospital, and UC Berkeley have been using the first-generation device in flagship, leadership development roles. I expect that will be the case for the DGX-2 sales targets: that small group on the bleeding edge of AI development.

Announcing a $400k AI accelerator may not have a direct effect on many of NVIDIA’s customers, but it clearly solidifies the company’s position of leadership and internal drive to maintain it. With added pressure from Intel, which is pushing hard into the AI and machine learning fields with acquisitions and internal development, NVIDIA needs to continue down its path and progression. If GTC has shown me anything this week, it’s that NVIDIA is doing just that.

AMD Security Concerns Overshadowed by Circumstances

On Tuesday, a security research firm called CTS Labs released information regarding 13 security vulnerabilities that impact modern AMD processors in the Ryzen and EPYC families. CTS launched a website, a couple of explanatory videos, and a white paper detailing the collection of security issues, though without details of implementation (which is good).

On the surface, these potential exploits are a serious concern for both AMD and its customers and clients. With the recent tidal wave caused by the Spectre and Meltdown security vulnerabilities at the beginning of the year, which have led to some serious talk of hardware changes and legal fallout like lawsuits against chip giant Intel, these types of claims are taken more seriously than ever before. That isn’t by itself a negative for consumers – putting more emphasis on security and culpability on the technology companies will result in positive changes.

CTS Labs groups the vulnerabilities into four categories that go by the names Ryzenfall, Fallout, Masterkey, and Chimera. The first three affect the processor itself and the secure processor embedded in it, while the last one (Chimera) affects the chipset used on Ryzen motherboards. The heart of the exploit on the processor centers on the ability to overwrite the firmware of the “Secure Processor,” a dedicated Arm Cortex-A5 part that runs a separate OS. Its job is to handle security tasks like password management. Being able to take control of this part has serious implications for essentially all areas of the platform, from secure memory access to Windows secure storage locations.

Source: CTS Labs

The Chimera vulnerability stems from a years-old exploit in a portion of the ASMedia-designed chipset that supports Ryzen processors, allowing for potential man-in-the-middle attacks to access network and storage traffic.

In all of these cases, the exploits require the attacker to have physical access to the system (to flash a BIOS) or elevated, root privileges. While not a difficult scenario to set up, it does put these security issues into a secondary class of risk. If you have a pre-compromised system, there are already a significant number of exploits that all systems are at risk of.

It is interesting to note from a technical standpoint that all of the vulnerabilities center around the integration of the Secure Processor, not the fundamental architecture of the Zen design. It is a nuanced difference, but one that separates this from the Spectre/Meltdown category. If these concerns are valid, it’s possible that AMD could somewhat easily swap out this secure processor design for another, or remove it completely for some product lines, without touching the base architecture of the CPU.

For its part, AMD has been attentive to the new security claims. The company was given less than 24 hours’ notice of the security vulnerabilities, a significant departure from common security research practice. For Spectre/Meltdown, Intel and the industry were given 30-90 days’ notice, giving them time to do research and develop a plan to address it. CTS Labs claims that the quick release of its information was to keep the public informed. Without the time to do validation, AMD is still unable to confirm the vulnerabilities as of this writing.

CTS is holding back details of implementation for the vulnerability from the public, which is common practice until the vendor is able to provide a fix.

There is more to this controversy, unfortunately, than simply the potential security vulnerabilities. CTS Labs also talked with other select groups prior to its public data release. The research entity pre-briefed some media outlets, which is not entirely uncommon. Secondary security researchers were given access to the POCs (proofs of concept) to validate the vulnerabilities. Again, that is fairly expected.

But CTS also discussed the security issues with a company called Viceroy Research, which has been accused in the past of creating dicey financial situations for companies in order to make a short-term profit. In this case, Viceroy published a paper on the same day as the release of CTS Labs’ own report, calling for AMD to file for bankruptcy and claiming the stock should have a $0.00 value.

To be frank, the opinions contained in the paper are absurd, and show a clear lack of understanding of the technical concerns surrounding security issues and of the market conditions for high-tech companies. Calling for a total recall of products over what CTS has detailed on AMD’s Ryzen hardware, without understanding the complexity of the more direct hardware-level concerns of Spectre/Meltdown that have been in the news for three months, leaves me scratching my head.

Because of this secondary paper and the financial implications in play, the entire CTS Labs report and production is painted in a very bad light. If the security concerns were as grave as the firm claims, and the risk to consumers is real, then it did a disservice to the community by clouding the information with the circus that devoured it.

With all that said, AMD should be, and appears to be, taking the security concerns raised in this report with the level of seriousness they demand. AMD is working against a clock that might be unfair and against industry norms, but from my conversations with AMD personnel, the engineering and security teams are working around the clock to get this right. With the raised level of scrutiny around chip security after the Meltdown and Spectre release, no company can take the risk of leaving security behind.

Windows ML Standardizes Machine Learning and AI for Developers

During its Windows Developer Day this week, Microsoft took the covers off of its plans to help accelerate and dominate in the world of machine learning. Windows ML is a new API that Microsoft will be including in the RS4 release of Windows 10 this year, enabling a new class of developer to access the power and capability of machine learning for their software.

Microsoft already uses machine learning and AI in Windows 10 and on its Azure cloud infrastructure. This ranges from analyzing live camera feeds to AI for game engines and even indexing for search functionality on your local machine. Cortana is the most explicit and public example of what Microsoft has built today, with the Photos app’s facial recognition and image classification being a close second.

Windows ML allows software developers to utilize pre-trained machine learning models to power new experiences and classes of apps. The API allows for simple integration with existing Microsoft development tools like Visual Studio. Windows ML supports direct importing of ONNX (Open Neural Network Exchange) formatted files that represent deep learning compute models, allowing for easy transferal and sharing between application environments. This format was introduced by Microsoft and Facebook back in September of last year. Frameworks like Caffe2, PyTorch, and Cognitive Toolkit support ONNX export, so models trained in them can run inference through any system that integrates ONNX.
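As a rough illustration of that workflow, here is a minimal sketch of exporting a PyTorch model to ONNX so it could be consumed by an ONNX-aware runtime such as Windows ML; the choice of ResNet-18 and the input shape are illustrative assumptions, not tied to any specific application:

```python
# Minimal sketch: export a PyTorch model to the ONNX format that Windows ML imports.
import torch
import torchvision

# In practice this would be a fully trained model; an off-the-shelf ResNet-18
# is used here purely to illustrate the export call.
model = torchvision.models.resnet18()
model.eval()

# A dummy input defines the input shape the exported graph expects
# (a batch of one 224x224 RGB image).
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```

The resulting resnet18.onnx file is what an ONNX-importing runtime would load for on-device inference.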

To be clear, Windows ML isn’t intended to replace the training activity that you would run on larger, high-performance server clusters. Microsoft still touts its Azure Cloud infrastructure for that, but it does see benefits to pairing that with the Windows ML enabled software ecosystem on edge devices. Software that wants to support updating training models with end-user input can do so with significantly less bandwidth required, as only the much smaller, pre-defined Windows ML result would need to be returned.

With Windows ML, an entire new class of developer will be able to utilize machine learning and AI systems to improve the consumer experience. We will see spikes in AI-driven applications for image recognition, automated text generation, gaming, motion tracking, and so much more. There is a huge potential to be fulfilled by simply getting the power of machine learning into the hands of as many software developers as possible, and no one can offer that better than Microsoft.

Maybe the most exciting part about Windows ML to me is the support for hardware acceleration. The API will be able to run on CPUs, GPUs, and even newer AI-specific add-in hardware like the upcoming Intel Movidius chip. Using DirectX 12 hardware acceleration and the DX12 compute capabilities that were expanded with Windows 10, Microsoft will allow developers to write applications without worrying about code changes for the underlying hardware in a system to ensure compatibility. While performance will obviously scale from processor to processor, as will the user experiences built on it, Windows ML aims to create the same kind of API-layer advantages for machine learning that DirectX has provided for gaming and graphics.

Microsoft will support not only discrete graphics solutions but also integrated graphics from Intel (and, I assume, AMD). Windows ML will be one of the first major users of Intel’s AVX-512 capabilities (vector extensions added to consumer hardware with Skylake-X) and the Movidius dedicated AI processor. Qualcomm will also support the new API on its upcoming Always Connected PCs using the Snapdragon 835 platform, possibly opening up the first use case for the company’s dedicated on-chip AI Engine.

This new API will be supported with both Windows UWP apps (Windows Store) and Win32 apps (classic desktop apps).

We are still in the early phases of the true AI-driven future of computing. Microsoft has been a player in the consumer market with the Cortana integration on Windows, but it has seen limited success compared to the popularity of Google, Amazon, and even Apple systems. By enabling every Windows application developer to take advantage of machine learning with Windows ML, Microsoft will see significant movement in the space, much of it likely using its Azure cloud systems for training and management. And for consumers, the age of artificial intelligence ubiquity looks closer than ever.

AMD gains big in GPU market share last quarter thanks to cryptocurrency

The latest reports for the graphics industry were released this week and paint an interesting picture of the market. Not only was there an annual increase in GPU shipments nearing 10%, but the big winner looks to be AMD, with a noticeable swing in market share away from NVIDIA. Obviously this comes at a controversial time in the discrete graphics market, with cryptocurrency mining affecting both availability and pricing of hardware, but the numbers are telling of a market in flux.

As PC gamers and cryptocurrency miners continue to battle for the same pool of graphics hardware, both NVIDIA and AMD benefit from the competition in the marketplace. Sales are clearly higher than they would be if only PC gamers were the target customer, and because of that all the add-in card partners have no problem selling whatever the GPU vendors can supply them. But how did that play out in terms of market share shift in 2017?

Based on reports from Jon Peddie Research, some major swings have occurred. Looking at quarter-to-quarter movement from Q3 to Q4 2017, AMD sees an increase from 27.2% to 33.7% in add-in card shipments. That’s a jump of 6.5 percentage points in a single quarter; an impressive change. As there are only two competitors in the discrete space, NVIDIA saw the same 6.5-point drop over that span, falling from 72.8% to 66.3% share.

Source: Jon Peddie Research

Annually, comparing 2017 to 2016, AMD sees an increase from 29.5% to 33.7% market share, a change of 4.2 points. NVIDIA sees the inverse, a 4.2-point decline.
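A quick arithmetic check of the share swings quoted from the JPR figures above (all values in percentage points of add-in-board unit share):

```python
# Share deltas implied by the JPR numbers quoted above.
amd_q3, amd_q4 = 27.2, 33.7
amd_2016, amd_2017 = 29.5, 33.7

print(round(amd_q4 - amd_q3, 1))       # +6.5 points quarter-to-quarter
print(round(amd_2017 - amd_2016, 1))   # +4.2 points year-over-year
print(round(100 - amd_q4, 1))          # NVIDIA's implied Q4 share: 66.3%
```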

Clearly, NVIDIA is still the leader in graphics card sales globally. Though the green giant dropped from 72% to 66%, it maintains a roughly 33-point advantage over the Radeon brand. NVIDIA’s GeForce products continue to provide better power efficiency and arguably better adjacent technologies (drivers, software tools, display tech, etc.), and I believe that gamers prefer them by default (justified or not).

It is also worth noting that NVIDIA tends to have better margins and higher ASPs on average than AMD in the consumer graphics space. So while unit share might have tilted in favor of AMD by 6.5 points this past quarter, it is reasonable to assume that revenue share has moved less than that.

JPR showed a seasonal drop from Q3 to Q4 in total graphics shipments of -4.6%. This is close to the expected result from more than a decade of market tracking, though 2016 was an outlier with a Q4 increase. Returning to seasonality trends may indicate the GPU market is becoming more stable, and the battle between gamers and miners might be waning. However, this could also be a result of gamers backing away from the discrete GPU space and deciding to wait for the rumored launches of updated NVIDIA products this spring.

Another interesting note from the JPR data is that the graphics market saw a 9.7% year-on-year increase in shipments, likely indicative of the impact mining-specific sales have had overall.

Peddie claims that add-in card vendors sold more than 3 million units directly to cryptocurrency miners, worth more than $776M in revenue. That averages out to more than $258 per card, much higher than the typical GPU selling price over the last decade. Increasing the ASP (average selling price) is great for both AMD and NVIDIA, as it brings up margins, as is evident from recent earnings releases by both companies.

With more than 52 million add-in cards sold in total for 2017, cryptocurrency-specific sales represent about 6% of the total market. This is slightly lower than my expectations but still a significant driver.
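The per-card and share-of-market figures above fall out of simple division on the JPR estimates:

```python
# Arithmetic behind the mining-related figures quoted above.
mining_units = 3.0e6             # cards sold directly to miners (JPR estimate)
mining_revenue = 776e6           # dollars
total_units_2017 = 52e6          # total add-in cards sold in 2017

print(round(mining_revenue / mining_units))              # ~$259 average selling price
print(round(100 * mining_units / total_units_2017, 1))   # ~5.8% of total unit shipments
```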

It would appear that AMD is indeed the primary beneficiary of the mining market influx based on the market share increases in this most recent quarter. It is unlikely that a significant number of gamers decided to buy AMD graphics hardware (when it was even available) over NVIDIA in that window, so it seems like a safe assumption that the majority of that increase is courtesy of cryptocurrency sales.

Source: Jon Peddie Research

This shift in the market has also impacted the distribution of graphics card sales, moving unit sales from lower-priced to higher-priced segments. JPR defines a “mainstream” graphics card as one sold for less than $150, a “midrange” graphics card as one sold from $150-$249, and a “high-end” graphics card as anything sold over $250. Looking at quarter-to-quarter changes, the high-end market moved from 11.5% to 16.0% and the midrange bumped from 41.5% to 51.7% of total shipments. As a result, the mainstream segment dropped from 39.2% down to 26.1%.

Again, this moves ASPs to areas that benefit both AMD and NVIDIA, improving margins.

The jump that AMD has made based on these reports should not be dismissed and instead paints a very positive picture for the company in the immediate term. On short-term profitability, AMD showed much better-than-expected results in Q4 last year, and I expect we will see at least some of that value appear in the Q1 2018 results.

What happens next in the world of graphics is going to be quite an interesting story. There is a ton of uncertainty as we move into the rumored release of another generation of NVIDIA GeForce graphics cards sometime this spring. (AMD doesn’t have anything on the docket for new consumer graphics products for the foreseeable future.) How will NVIDIA modify pricing with this new product family? Keeping in mind that the higher-than-MSRP prices that graphics cards sell at do not directly benefit the GPU provider (instead the add-in card partners or channel sales take that money today), NVIDIA couldn’t be blamed for wanting a part of that additional revenue for itself.

Another theory circulating is that NVIDIA might attempt to find a way to lock out cryptocurrency mining for a subset of graphics cards in order to alleviate PC gamers’ angst. While this could be technically possible, it’s difficult to thwart a community that is built on profitability. It would represent a tremendous good-will gesture from NVIDIA, but it doesn’t make a lot of fiscal sense for the company in the short term.

Putting that aside, if the next generation of GeForce products improve performance and power efficiency for cryptocurrency mining, and NVIDIA has done a decent job of collecting inventory, there is a reasonable chance that the market share changes we saw take place in Q4 2017 could reverse.

The cryptocurrency space remains volatile and highly unpredictable, and if the market shifts and we return to a world where gaming performance, features, and efficiency are restored as the most important reasons for graphics card purchases, NVIDIA should again have the edge. The gains AMD has seen in the latest JPR information look to be dependent on coin-mining, and without that to lean on, the Radeon products could return to a battle they are not as well equipped for.

Even if blockchain technology is here for good, and it appears that is the case, both companies are hesitant to publicly invest in the field. GPUs have been the dominant compute platform for cryptocurrency mining since its inception, but the encroachment of ASICs and changing algorithms could lessen the value to both NVIDIA and AMD seemingly at any point.

Intel lays out plans for 5G technology future

Intel was one of the early noise makers around the upcoming transition from 4G to 5G cellular technology, joining Qualcomm in promising the revolution would change the way we interact with nearly everything around us. Though not exactly a newcomer to the world of wireless technology, with the launch of WiMAX under its belt, Intel has much to prove in the space of wireless communications.

New information this week surrounds Intel’s partnerships with key PC vendors to bring 5G-enabled notebooks to market in late 2019 and a deal with Chinese chip vendor Spreadtrum to use Intel 5G modems in smartphones in the same timeframe.

The strategy that Intel has been progressing with for several years is holistic in nature, covering everything from processors for the cloud server infrastructure and datacenter to network storage to cellular modems and even chips for smart devices themselves. This broad and extensive approach provides Intel with some critical advantages. It can leverage the areas where it is already considered a leader for safer bets while diving into riskier areas like the cellular interface itself (the modem), where traditional technology providers like Qualcomm lead.

5G Cloud System Advantage

Intel’s core markets in the spectrum of 5G technology lie with systems that depend on hardware designs the company is already dominant in. Cloud datacenters, for example, are powered today by Intel servers using its Xeon product family, which holds more than a decade of unrivaled leadership. Even the network storage and virtualization segments that connect the cloud systems to the cellular networks favor the Intel architecture and design, backed by years of software development and enterprise expertise.

Managers and CTOs are intimately familiar with the capabilities and performance that Intel provides in these spaces, and feel more comfortable adopting the company’s chips for the 5G migration coming in 2018 and 2019.

Edge Computing Creates Growth

Edge computing is a new and growing field, representing the migration of higher-performance servers from the centralized datacenter to locations as near to the consumer as possible. This could materialize as hardware living at the site of each cellular antenna or as collections of servers distributed to key locations around the country, addressing large urban populations.

As the movement to smart cities, robotics, and multi-purpose drones grows along with 5G, the need for analytics, off-loaded processing, and data storage to be closer to the edge increases. This data and compute proximity lowers dependency on any single datacenter location and improves performance while reducing latency of the interactions.

These edge compute roll-outs will offer a significant revenue growth opportunity for all players, including AMD, but Intel’s leadership in the server space gives it increased potential.

Intel Behind in 5G Modems

Moving the discussion to the cellular networks themselves and the need for a 5G modem in devices like smartphones and PCs, Intel has a very different outlook. Modem technology and analog signaling is a more complex field than most understand, and the lack of experience on Intel’s teams is a significant concern. Qualcomm has publicly stated several times that it believes it holds a 12-24 month lead on any competitor in the 5G modem space.

Intel has a 5G modem called the XMM 8060, and at the 2018 Winter Olympics Intel has been demoing 5G technology through various VR experiences. However, the 5G connectivity in use there does not run on the final silicon the XMM 8060 will ship as, but instead on a “concept modem” used for trials, diagnostics, and product tuning. All technology vendors use this tactic to gain knowledge about products in the design phase, but it’s rare to see significant public demonstrations using them.

At Mobile World Congress in Barcelona next week, Intel will be showing the world’s first 5G-enabled Windows PC. Hot on the heels of Qualcomm touting the future of connected Windows devices shipping this quarter, Intel is eager to assert its dominance in the PC world and showcase the future of mobile computing. Partnering with Dell, HP, Lenovo, and Microsoft to enable 5G connectivity alongside Intel Core processors, Intel believes the market will see 10-15% attach rates for cellular modems on notebooks over the next 3-5 years. Computers based around this 5G technology aren’t expected to ship until holiday 2019 and the demoed prototype is still using the “concept modem” from Intel rather than final silicon.

The second announcement from MWC next week is Intel’s partnership with Spreadtrum, a Chinese semiconductor company that builds mobile processors for smartphones. As part of a multi-year agreement, Intel will provide a series of XMM 8000-series 5G-enabled modems for Spreadtrum to use in conjunction with its own mobile chips beginning, again, in the second half of 2019. Though Spreadtrum is a small SoC vendor globally, having any partners announced this far in advance is a positive sign. However, compared to recent Qualcomm announcements that included 18+ device OEMs using its 5G modems this year, Intel is well behind.

The Apple Possibility

There is one possible exception: Apple. Rumors continue to circulate that Apple may be trying to remove Qualcomm modems from its iPhone product family completely in 2019 or 2020, with Intel being the obvious modem replacement. If that holds true, Intel will have an enormous customer account to justify its development costs. Being associated with a company often considered the most advanced in the mobile space has its advantages too.

Apple has indicated that it sees 5G technology as a 2020 growth opportunity, which would allow time for Intel to finalize the XMM 8060 modem. Competing Android devices are expected to ship in late 2018 and ramp in 2019 using Qualcomm 5G modems.

New AMD chip takes on Intel with better graphics

Nearly a full year after the company started revamping its entire processor lineup to catch up with Intel, AMD has finally released a chip that can address one of the largest available markets. Processors with integrated graphics ship in the majority of PCs and notebooks around the world, but the company’s first Ryzen processors released in 2017 did not include graphics technology.

Information from Jon Peddie Research indicates that 267 million processors with integrated or embedded graphics technology were shipped in Q3 2017 alone. The new AMD part that goes on sale today in systems and the retail channel gives AMD a chance to cut into Intel’s significant market leadership in this segment, replacing a nearly 2-year-old product.

Today AMD stands at just 5% market share in the desktop PC space with integrated graphics processors, a number that AMD CEO Lisa Su believes can grow with this newest Ryzen CPU.

Early reviews indicate that the AMD integrated graphics chips are vastly superior to the Intel counterparts when it comes to graphics and gaming workloads and are competitive in standard everyday computing tasks. Testing we ran that was published over at PC Perspective shows that when playing modern games at mainstream resolutions and settings (720p to 1080p depending on the specific title in question), the Ryzen 5 2400G is as much as 3x faster than the Core i5-8400 from Intel when using integrated processor graphics exclusively. This isn’t a minor performance delta and is the difference between having a system that is actually usable for gaming and one that isn’t.

The performance leadership in gaming means AMD processors are more likely to be used in mainstream and small form factor gaming PCs and should grab share in expanding markets.

China and India, both regions that are sensitive to cost, power consumption, and physical system size, will find the AMD Ryzen processor with the updated graphics chip on-board compelling. AMD offers much higher gaming performance using the same power and at a lower price. Intel systems that want to compete with the performance AMD’s new chip offers will need to add a separate graphics card from AMD or NVIDIA, increasing both cost and complexity of the design.

Though Intel is the obvious target of this new product release, NVIDIA and AMD (ironically) could also see impact as sales of low-cost discrete graphics chips won’t be necessary for systems that use the new AMD processor. This will only affect the very bottom of the consumer product stack though, leaving the high-end of the market alone, where NVIDIA enjoys much higher margins and market share.

The GT 1030 from NVIDIA and the Radeon RX 550 from AMD are both faster in gaming than the new Ryzen processor with integrated graphics, but the differences are in an area where consumers in this space are likely to see it as a wash. Adding to the story is the fact that the Ryzen processor is cheaper, draws less power, and puts fewer requirements on the rest of the system (lower-cost power supply, smaller chassis).

AMD’s biggest hurdle now might be to overcome the perception of integrated processor graphics and the stigma it has in the market. DIY consumers continue to believe that all integrated graphics is bad, a position made prominent by the lack of upgrades and improvements from Intel over the years. Users are going to need to see proof (from reviews and other users) to buy into the work that AMD has put into this product. Even system integrators and OEMs that often live off the additional profit margin of upgrades to base system builds (of which discrete graphics additions are a big part) will push back on the value that AMD provides.

AMD has built an excellent and unique processor for the mainstream consumer and enterprise markets that places the company in a fight it has been absent from for the last several generations. Success here will be measured not just by channel sales but also by how many inroads it can make in the larger consumer and SMB pre-built space. Messaging and marketing the value of having vastly superior processor graphics is the hurdle leadership needs to tackle out of the gate.

Ex-Intel President Leads Ampere into Arm Server Race

In a world where semiconductor consolidation is the norm, it’s not often that a new player enters the field. Even fabless semiconductor companies have been the target of mergers and acquisitions (Qualcomm being the most recent and largest example), making the recent emergence of Ampere all the more interesting. Ampere is building a new Arm-based processor and platform architecture to address the hyperscale cloud compute demands of today and the future.

Though the name will be new to most of you, the background and history are not. Owned by the Carlyle Group, which purchased the Applied Micro CPU division from MACOM last year, Ampere has a solid collection of CPU design engineers and has put together a powerful executive leadership team. At the helm as CEO is Renee James, a former president of Intel who left the company in 2015. She brings a massive amount of experience from the highest level of the world’s largest semiconductor company. Ampere also touts an ex-AMD Fellow, a former head of x86 architecture at Intel, an ex-Intel head of platform engineering, and even an ex-Apple semiconductor group lead.

Architecturally, the Ampere platforms are built with a custom core design based on the Arm architecture, utilizing the ARMv8 instruction set. Currently shipping is the 16nm processor codenamed Skylark, with 32 cores and a 3.0 GHz or higher clock speed. The platform includes eight DDR4 channels, 42 lanes of PCI Express 3.0, and a TDP of 125 watts. The focus of this design is on memory capacity and throughput, with competitive SPECint performance. In my conversation with James last week, she emphasized that memory and connectivity are crucial to targeting lower costs for the cloud infrastructure that demands them.

The second generation of Ampere’s product stack, called Quicksilver, is coming in mid-2019. It will move to the updated ARMv8.2 instruction set, increase core count, improve overall IPC, and add multi-socket capability. Memory speed will get a bump and connectivity moves to PCI Express 4.0. It will include CCIX support as well, an industry-standard cache-coherent interface for connecting processors and accelerators from various vendors.

Interestingly, this part will be built on the TSMC 7nm process technology which Ampere CEO James says will have a “fighting chance” to compete or beat the capabilities provided to Intel by its own in-house developed process technology. That isn’t a statement to make lightly and puts in context the potential impact that Intel’s continued 10nm delays might have for the company long-term.

For systems partnerships, Ampere is working with Lenovo. This is a strong move by both parties, as Lenovo has significant OEM and ODM resources, along with worldwide distribution and support. If the Ampere parts do indeed have an impact in the cloud server ecosystem, having a partner like Lenovo that is both capable and eager to grow in the space provides a lot of flexibility.

Hardware is one thing, but solving the software puzzle around Ampere’s move into the hyperscale cloud server market is equally important. James told me that the team she has put together knows the importance of a strong software support system for enterprise developers, and seeing that happen first-hand at Intel gives her a distinct advantage. Even though other players like Arm and Qualcomm are already involved in the open source community, Ampere believes that it will be able to make a more significant impact in a shorter period, moving forward support for all Arm processors in the server space. Migrating the key applications and workloads, like Apache, memcache, Hadoop, and Swift, to native, and most importantly efficient, code paths is required for wide-scale adoption.

Followers of the space may be wondering why now is the right time for a company like Ampere to succeed. We have seen claims and dealt with false promises from numerous other Arm-based server platform providers, including AMD and the source of Ampere’s current team, Applied Micro. Are the processors that much different in 2018 from those that existed in 2013? At their core, no. But it’s the surrounding tentpoles that make it different this time.

“Five years ago, this couldn’t have happened,” said James in our interview. The Arm architecture and instruction set have changed, with a lot more emphasis on the 64-bit superset and expanding its ability to address larger and faster pools of memory. Third-party foundries have caught up to Intel as well – remember that James believes TSMC’s 7nm node will rival Intel competitively for the first time. Finally, the workloads and demands from the datacenter have changed, moving even further away from the needs of “big cores” and toward the smaller, more power-efficient cores Ampere and other Arm options provide.

Obviously, that doesn’t apply to ALL server workloads, but the growth in the market is in that single-socket, memory- and connectivity-focused segment. AMD backs up Ampere’s belief here, with its own focus on single-socket servers to combat the Intel-dominated enterprise space, though EPYC still runs at higher power and performance levels than anything from the Arm ecosystem.

James ended our interview with a comparison of the Arm server options today to x86 servers more than 25 years ago. At the time, the datacenter was dominated by Sun and Sparc hardware, with Sun Microsystems running advertising claiming that Intel’s entry into the space with “toy” processors wasn’t possible. Fast forward to today and Intel has 99% market share in the server market with that fundamental architecture. James believes that same trajectory lies before the Arm-based counterparts rising today, including Ampere.

There is still a tremendous mountain to climb for both Ampere and the rest of the Arm ecosystem, and to be blunt, there is nothing that proves to me that any one company is committed completely. Qualcomm announced its Centriq CPUs last year, and Ampere claims to have started sampling in 2017 as well. We don’t yet have a single confirmed customer that has deployed Arm-based systems in a datacenter. Until that happens, and we see momentum pick up, Ampere remains in the position where previous and current Arm-based servers are found: behind.

AMD CEO Refocused Company on Growth in Graphics, High-Performance Compute

CEO Lisa Su believes that 2017 was “a year of inflection” for the company and one that bridges two different versions of AMD. Prior to that, AMD was a company in flux, with a seemingly unfocused direction that affected its ability to compete with Intel and NVIDIA. Moving from an organization that wanted to address every facet of the market to one realigned toward high-performance computing and graphics, Su has created a more efficient and more targeted company. From 2018 onward, Su has AMD on track to grow in well-established segments including processors, graphics, and enterprise, but also in expanding markets like blockchain and cryptocurrency, machine learning, and cloud computing.

The just-announced Q4 AMD financials paint a positive picture of growth for the once struggling chip company. Revenue spiked 34% to $1.5B this quarter, and margins for 2017 were up three points year-over-year, reaching 35%. Last year as a whole saw a 25% jump in revenue for AMD, a total of $1B in additional revenue, bringing the company to a full-year profit for the first time in recent memory.

The Now

By far the most head-turning result last quarter from AMD was its growth in the compute and graphics segment. Covering both the processor and graphics divisions, the segment saw a 60% jump in year-over-year revenue for Q4, attributed to the position of its consumer Ryzen processors and Radeon graphics solutions. 2017 saw the release of the Ryzen family of parts for mainstream and enthusiast buyers, enterprise systems, workstations, and even notebook and 2-in-1 PCs.

This group saw a $140M quarter-to-quarter revenue increase, of which roughly one-third is attributed to sales of graphics chips for blockchain processing, or cryptocurrency mining as it is most often referred to. (Blockchain is the underlying technology behind cryptocurrencies.) That means around $46M of the revenue growth in the fourth quarter came from sales of graphics cards into the cryptocurrency market, with the remainder coming from the combination of graphics chip sales for gaming, professional cards, and Ryzen processors. In total, more than 50% of the revenue growth for this segment of AMD is coming from its graphics products.
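The attribution works out as straightforward arithmetic on the figures AMD disclosed:

```python
# Rough attribution of the quarter-to-quarter growth described above.
segment_growth = 140e6            # Q3->Q4 revenue increase for compute and graphics
blockchain_share = 1 / 3          # roughly one-third attributed to mining GPUs

blockchain_growth = segment_growth * blockchain_share
print(round(blockchain_growth / 1e6))   # ~$47M, in line with the ~$46M figure above
```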

Accurately measuring the impact of cryptocurrency-based graphics chip sales can be difficult, as they are sold through the same partners and channels as gaming hardware. There are pros and cons to being a significant player in the coin-mining markets, from the surge in sales and revenue to the potential saturation of the graphics market should there be a steep decline in blockchain demand.

The Ryzen processor family was up in both units and revenue for the quarter, but AMD didn’t break down what that looked like in any more detail. ASPs (average selling prices) are up on the year but flat for the quarter as a result of the introduction of the Ryzen 3 family targeting the budget OEM market. Selling additional lower-priced hardware will generally bring down ASP and margin, but the addressable market is larger in this price segment than in any other consumer space.

AMD’s EESC group (enterprise, embedded, and semi-custom) was up only 3% year-on-year, a surprise considering the continued success of the game consoles that AMD hardware powers (Microsoft Xbox One and Sony PlayStation 4), and the release of EPYC processors for server and cloud infrastructure. Though the media reception has been strong for the performance and capability of the EPYC processor family, ramping sales in the server market is greatly dependent on refresh cycles and availability from vendors like HP and Dell.

The Future

AMD CEO Lisa Su also stated in the earnings call that the company would be attempting to “ramp up” graphics chip production in order to meet the demand for graphics cards in the market for both gaming and blockchain/cryptocurrency. The problem for AMD (and NVIDIA as well) is that the bottleneck doesn’t lie with the vendors that build the chips (groups like TSMC, Samsung, and GlobalFoundries that manufacture for fabless semiconductor companies like AMD, NVIDIA, and Qualcomm) but apparently with the memory ecosystem. The price of memory has increased drastically over the last year as demand for all variants has increased without corresponding growth in memory production capacity.

As a result, even if AMD could or wanted to significantly increase graphics chip production for its current family of parts, shortages in the memory space could hold back inventory increases to a large degree. There is a limit to how much any “ramp up” AMD moves forward with can affect its growth in the graphics market.

In some ways this might be a happy accident for AMD as it means limited risk of inventory concerns in the future. One of the biggest fears the cryptocurrency market has created is that a crash of the market will mean significant inventory pushed back into the resale market for consumers, stunting any future product releases or sales of product currently being produced.

While the cryptocurrency market will remain fluid and speculation will continue to drive price swings for the foreseeable future, blockchain technology itself has proven to be a viable solution to many computing and security problems. Previous Bitcoin crashes were dependent on only a single virtual currency but today we have seen the expansion to dozens of currency options and additional use cases for the underlying technology. AMD believes that the demand for blockchain processing on graphics processors will remain strong through the first quarter of 2018 and will be a continued source of revenue and sales well into the future.

Looking at the processor market, AMD’s move into the mobile processor segment comes with the most competitive product it has fielded in over a decade. By combining the technology of both a high-performance graphics chip and traditional processor in a single package, AMD can offer a solution that is unique from anything Intel currently sells.

We only saw a moderate growth rate in the enterprise segment for AMD with its EPYC processor but I expect that to increase through 2018 as more partners like Baidu and Microsoft integrate and upgrade systems in datacenters across the world. This space traditionally takes much longer than consumer areas to migrate to newer technology and AMD still believes it has a strong position of growth.

Intel, HP, and Dell Should Target Kaby Lake-G at Gamers

During CES earlier this month, Intel took the wraps off of its 8th Generation Core Processor with Radeon RX Vega M Graphics product, an innovative processor that combines the power of a quad-core Intel CPU and a discrete GPU from AMD on a single package. The technology behind the product is nearly as impressive as the business decision to make it happen, but unfortunately it looks like Intel and its partners might be misusing the potential that the processor codenamed Kaby Lake-G provides.

Under the hood, the Kaby Lake-G processor is a marvel. By combining the leading performance processor in the mobile space with a discrete-level graphics solution on the same package, Intel is able to build a part unrivaled in the consumer space. The graphics chip comes from AMD and is a semi-custom design based on the Vega architecture. It is physically connected to the processor via 8 lanes of PCI Express routed through the package. The GPU has a single stack of 8GB of HBM2 memory (the same type used on AMD’s latest Radeon RX Vega series of graphics cards) connected through Intel’s new EMIB (embedded multi-die interconnect bridge). EMIB provides the high-bandwidth interface that HBM2 memory requires while allowing for a low z-height package design, critical for mobile form factors aiming to be thinner.

Seeing Intel purchase a part directly from AMD in the form of a semi-custom graphics chip to integrate on its own processor design, in addition to depending on AMD to provide the necessary driver support for Intel to repackage and distribute, is still a shocking revelation. This gives Intel its first truly high-performance processor design with both CPU and GPU strengths, though it also indicates where Intel believes it stands in today’s market with regard to graphics performance.

But what has surprised me the most about this new processor is Intel’s decision to not market this product to gamers.

Kaby Lake-G will be available in two flavors: a 100-watt version and a 65-watt version. Claims from Intel put the 100-watt version on par with the graphics performance of an NVIDIA GeForce GTX 1060 and the 65-watt version on par with a GTX 1050. These are impressive claims and should give any notebook vendor an incredible opportunity to build a system with gaming-class performance but without the need for two dedicated, discrete chips. This should allow for smaller PCB designs, larger batteries, thinner form factors, etc.

At CES, Dell and HP showed new notebooks integrating the Kaby Lake-G processor. The Dell XPS 15 2-in-1 and the HP Spectre x360 15 are both impressive systems in their own right, combining solid design with thin form factors and great feature sets. However, when I talked with both parties about the planned market for the machines, they had the same answer: content creation users. When I asked about gamers, both companies seemed disinterested in the idea, saying these products instead “could be used for gaming.”

To be clear, these designs and the Intel Kaby Lake-G processor will indeed be excellent for content creation, especially any users that take advantage of software that can utilize the GPU. Adobe Premiere, rendering tools, CAD, etc. will all see improvements thanks to the addition of the discrete-level graphics on the new part. But there are two big reasons why Intel and its partners should be leaning more heavily into the gaming angle.

First, the timing is perfect. Prices for gaming graphics cards and system memory have spiked due to shortages caused by cryptocurrency mining, pushing many DIY users to put off or avoid a new system purchase or upgrade, so any alternative solution today will get a lot of attention. I have already explored how the current market is pushing consumers toward OEM PCs and gaming notebooks, and a sleek, compact, and innovative design based on Kaby Lake-G would be an excellent choice. Gamers who have been beaten up on pricing because of coin mining will be looking at pre-built machines and gaming notebooks more than at any other time. Intel’s latest processor combination can be implemented in capable systems that are as gorgeous as they are performant.

The second reason centers on what has become Intel’s enemy #1: NVIDIA. There is no denying the envy Intel feels when looking at the rise in value and technological advantage that NVIDIA has created in the fields of machine learning, artificial intelligence, autonomous driving, and even high-performance compute. While Intel is on track to bring competitive options in those spaces in the coming years, it could be using the Kaby Lake-G part to compete with NVIDIA on its home turf: discrete graphics chips for gaming. NVIDIA essentially owns the mobile graphics market, and (nearly) any machine currently shipping that targets gaming includes a discrete GeForce part. The Kaby Lake-G solution will be able to offer competitive performance in the largest area of the mobile gaming market (the GTX 1050-1060 performance level), and Intel can leverage its combined package and design as an advantage. Not only that, but some good old price negotiations couldn’t hurt either.

And from my perspective, the design of Kaby Lake-G just makes it good at gaming. Intel has discussed openly its ability to balance CPU and GPU power draw to maximize performance efficiency for games, how it plans to properly support gamers with day-zero driver support for new titles, and quotes gaming performance as its flagship metric to the press. But the first wave of products based on the processor don’t seem to have even considered that angle.

This is why Intel’s decision to not focus on the gaming angle for the 8th Generation Core Processor with Radeon RX Vega M Graphics is odd and out of place. Yes, there are multiple uses for this new processor family but I think Intel, Dell, and HP are missing out on a crucial opportunity with gamers and the eager gaming environment .

Qualcomm Lays Out Plans for Future without Broadcom

In early November of last year, Broadcom submitted an unsolicited bid to purchase Qualcomm at a price of $70 per share, or roughly $130B. The deal would have created the third largest chip maker in the world, behind only Intel and Samsung, and would have sent a shockwave through the semiconductor and technology ecosystem as never before. The largest completed technology sector acquisition was Dell’s purchase of EMC for $67B; the Broadcom bid for Qualcomm would have nearly doubled it.

However, Qualcomm’s board rejected the offer while promising shareholders value and direction for the company going forward. Fireworks ensued and Broadcom has now launched a hostile takeover that starts with a Board of Directors replacement, to be voted upon by shareholders. In an attempt to maintain its independence and direction, Qualcomm decided to go on the offensive and detail for the public its outlook and roadmap for the future, confident that the opportunities before it exceed what Broadcom has brought to the table.

Much has been written about how a merger of Broadcom and Qualcomm would negatively impact the industry. The concerns range from the slowing of 5G progress, a lack of competing solutions in the networking space, slower introduction of new cellular technologies, a shifting R&D cycle, and damaging cost reductions in product development, among others. Today I want to quickly touch on how Qualcomm presented its future without Broadcom through a 35-minute video and presentation this week. CEO Steve Mollenkopf and the top executives at Qualcomm have a substantial vision for where the company will be in just a few short years.

To be blunt, the position Qualcomm finds itself in today is unenviable. The licensing division remains in a major dispute with its largest customer (Apple), and regulatory groups in several regions of the globe are investigating Qualcomm’s business models and licensing tactics. With licensing income being withheld by Apple and another licensee, Qualcomm may appear weak and ripe for acquisition. Though the bid from Broadcom was near the median of acquisition offers for the technology industry (based on Bloomberg’s estimation), Qualcomm and its board believe that the next several years warrant a much bigger discussion than Broadcom is willing to engage in.

The key to Qualcomm’s argument and fight against the buy-out stems from the growth it projects in the core mobile and adjacent growth markets. Even without the NXP acquisition completed, Qualcomm sees value in those adjacent markets for FY19 revenues. RF front-end development in tier-1 designs and the leading configurable front-end that will be optimized for 5G migration and continued 4G technology support, with a $2-3B revenue target, offer substantial windows for revenue. The automotive industry is growing in complexity with the advent of self-driving technology, but Qualcomm has $1B in opportunity for FY19 with its strength in telematics, Bluetooth, and infotainment systems. Though NVIDIA dominates headlines when it comes to autonomous driving and its high-performance graphics systems, and Qualcomm will need more engineering time target that segment, the surrounding systems are ripe for the performance and connectivity that the company can offer that see benefits from the power efficiency Qualcomm chips can provide.

Qualcomm is a leader in the IoT space with computing and connectivity options that others are unable to match. It currently works with more than 500 customers across voice and music, wearables, and even smart city integration. Qualcomm believes this will become a $2B revenue opportunity by FY19. The compute segment, which includes the recently announced Windows 10 based PCs, is expected to grow into another $1B in revenue for FY19. Qualcomm’s advantages in connectivity, 4G/5G, and the always-on, always-connected battery life gains provide standout features that give it the potential to dislodge Intel in key market segments.

Finally, Qualcomm estimates the networking segment it has built for home and enterprise spaces will grow into another $1B revenue segment by FY19. Qualcomm and its partners already lead in home and enterprise wireless networks, and the spike in growth for mesh Wi-Fi networking will be needed to enable many carrier solutions and the proliferation of 802.11ax.

Beyond new product types are new product growth regions. China provides as much as $6B in potential product revenue in the mobile space and grew 25% YoY in FY17. That is two times the revenue that Qualcomm receives from Apple today, a statement clearly made to alleviate the concern that Qualcomm’s future is inexorably tied to the outcome of the Apple litigation and future relationship.

Because Chinese OEMs like Xiaomi, Vivo, and Oppo are gaining market share globally, with rapid expansion into India, Europe, and eventually the US, Qualcomm’s existing relationships with these customers should result in increased revenue. The China market is going through a transition as consumers migrate to higher tiers of devices with more features and more capabilities. This movement favors the higher performance Snapdragon lineup over competing options from MediaTek and will likely result in higher domestic share on Chinese OEMs’ product lines.

The migration to 5G will play a critical role in the growth of Qualcomm’s technological roadmap, and the company believes it has a 12-24 month lead over its merchant competitors (those that sell to OEMs) in this space. Qualcomm is well known as a driver and creator of new industry standards, with the push from 3G to 4G as a prime example. Qualcomm’s revenue doubled during that transition period of FY10 to FY13, and though the company won’t put specific estimates like that in place for the move from 4G to 5G, the roll out starting in early 2019 is the prime opportunity for Qualcomm’s connectivity advantage to be showcased.

From the licensing angle, Qualcomm believes that as much as 75% of the 4G patents it holds will be applicable to the 5G roadmap. This puts the company in a great position to leverage its previous technology R&D for future income.

These are just a handful of the emerging opportunities that Qualcomm sees before it in 2018 and through 2020. I didn’t even touch on the 6% annual growth of the Android ecosystem, which holds an 80% share of smartphones and in which Qualcomm is the chip leader. A target of $35-37B in FY2019 revenue is a daunting task and will require execution on multiple fronts for Qualcomm to meet that goal. But with the areas of growth outlined above, the offer from Broadcom stands in stark contrast to the reality of a company poised to cultivate the next generation of connected technologies.

Intel Gemini Lake hopes to hold off Windows on Snapdragon push

After last week’s pronouncement from Intel CEO Brian Krzanich claimed that “the world will run on Intel silicon,” the company has a lot of ground to cover to make that happen. One recent area of attack from the outside comes from Qualcomm and its push into the world of Windows PCs through the “Windows on Snapdragon” initiative. Using chips initially designed to target smartphones and tablets, Qualcomm is leveeing lower power consumption, a true connected standby capability, and connectivity improvements with an LTE modem to address one of the many markets that Intel has previously held dominance over.

Even with little experience in the world of Windows-based PCs and with silicon designs that are well understood to be built for smartphones first (at least in the initial implementations), Qualcomm’s Snapdragon is able to make a run at the lower tiers of notebook and convertible PC markets in large part due to Intel’s ambivalence. Intel has put seemingly little emphasis on the low power processor space, instead putting weight behind the “Core” family of products that provide the compute capability for higher-end notebooks, desktops, and enterprise servers.

Intel has tried various tactics in the low power space. It has tried to revive and modify the older Pentium architecture, and it has also attempted to bring the “big” cores used in its higher performance processors down to a lower power rung. But doing so is difficult and puts great strain on the design engineers and production facilities to deliver transistors that perform optimally at both the high and low ends of the performance spectrum. The result has been a family of products, over several generations, that showed little improvement in performance, efficiency, or interest from Intel.

It is because of this lack of iterative performance improvement that Qualcomm has the opening to offer Snapdragon as a competitive alternative. Years of ignoring the space left a gap that competitors could grow into.

The mid-December announcement of the Pentium Silver and Celeron family of parts was met with very little fanfare, from either the press or Intel PR. Only after the release of architectural information in the form of software development documents do we find hope that the Goldmont Plus architecture behind the Gemini Lake cores in these new processors may offer enough of an improvement in performance to make an impact. As the follow-on to the Apollo Lake parts, Gemini Lake was expected to be just another refresh, but early performance metrics indicate that we may see as much as a 25% increase in IPC (instructions per clock) along with slightly increased clock rates. For multi-threaded workloads, on a chip that can integrate as many as four cores, benchmarks show a 45% increase over the previous generation.
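
As a rough illustration of how those figures compound (the clock values below are hypothetical placeholders for an Apollo Lake and a Gemini Lake part, not Intel specifications), single-thread throughput scales roughly with the product of IPC and frequency:

    # Rough single-thread scaling model: throughput ~ IPC x clock frequency.
    # Clock values are hypothetical, chosen only to illustrate the compounding.
    ipc_gain = 1.25            # ~25% IPC improvement claimed for Goldmont Plus
    old_clock_ghz = 2.4        # hypothetical Apollo Lake burst clock
    new_clock_ghz = 2.6        # hypothetical Gemini Lake burst clock

    uplift = ipc_gain * (new_clock_ghz / old_clock_ghz) - 1
    print(f"Estimated single-thread uplift: {uplift:.0%}")   # ~35% with these inputs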

Processors based on this design will sport TDPs starting at 6 watts, which is higher than where we expect the Snapdragon 835 in the first generation of Windows devices to operate. Intel does claim that the SDP, or Scenario Design Power, of a part like the Pentium Silver N5000 will be around 4.8 watts, indicating the power level at which Intel expects normal processing to occur. Close may not cut it though, as power consumption in standby states is going to be critical to the success of platforms in this class.

Maybe most interesting is the addition of CNVi, an integrated connectivity block in the architecture. This is Intel’s attempt to simplify the integration and reduce the complexity of an RF chip, and it should allow OEMs and partners to more easily integrate high-speed WiFi and LTE/cellular connections. Intel has clearly identified the connectivity advantage that Qualcomm holds with its integrated X16 Gigabit LTE modem as a danger to its market leadership.

We will hopefully see the first wave of Gemini Lake powered notebooks and convertibles at CES next month, though availability is looking closer to the February or March timeline. Even with announcements from key OEMs, usability testing will be needed to see how much performance the Goldmont Plus architecture truly delivers and whether it can offer similar always-on, always-connected capability and battery life. No one expected Intel to sit back and let Qualcomm or other ARM processor vendors simply take over a sizeable portion of any market, but we will now see whether Intel’s first product response holds any water.

Intel CEO Krzanich Predicts the World Will Run on Intel Silicon

In an internal memo to employees that surfaced earlier this week, Intel CEO Brian Krzanich told staff that the company’s new, aggressive strategy for 2018 would involve taking more risks and expanding the areas in which Intel can impact technology. Citing change as the “new normal,” the leader often called simply “BK” is telling his engineers, marketers, product managers, and sales teams to prepare for a very different Intel in the coming months. This is a change that many analysts in the tech community have been asking for, and one that Intel finally seems to be publicly committing to.

For the better part of the 2010s, Intel has played an interesting role in the technology field. Though it dominated in the PC space, most pundits viewed the corporate direction as somewhat aimless and lacking a drive to push forward in any one particular space. That included the client compute and enterprise groups, both of which enjoyed a stranglehold in market share. The Xeon group was maintaining near 99% saturation in traditional server platforms and the Core-branded processors were the dominant player in PC gaming, mobility, and professional computer segments.

During that time certain sub-groups noticed a lack of innovation. PC gamers and enthusiasts are a small but vocal segment of the PC ecosystem. It is this group that is the canary in the coal mine; when it starts to recognize the lack of change and performance or feature improvement, it is a tell-tale sign that the rest of Intel’s business units will suffer the same fate as the technology roadmaps progress.

As a result, Intel allowed several key competitors to re-enter valuable PC and server segments, threatening to take over market share and decrease Intel’s margins. AMD was able to offer new products like Ryzen for the consumer desktop, Threadripper for enthusiasts and workstation users, and EPYC for the enterprise and cloud, all of which will reduce the dominance of Intel in those specific fields. Qualcomm is also entering the cloud computing space with its own Centriq 2400 server processors based on the Arm platform, circumventing traditional processor designs with a more power-efficient implementation.

Intel has fallen behind in the current race for dominance in the machine learning and artificial intelligence fields. Once considered the pinnacle for any compute workload, Intel processors have been overshadowed by the designs from NVIDIA (and AMD to some degree) that started as graphics and gaming chips but have migrated and evolved to focus on the massive amounts of data and parallel processing required to develop intelligent systems. As a result, NVIDIA stock has skyrocketed, and CEO Jensen Huang is cited as a leading mind in compute fields in countless media representations.

Even the crown jewel of Intel’s operation, its silicon fabrication technology, is starting to show signs of strain. Intel has been building processors on the 14nm process node for much longer than originally planned or expected based on roadmap projections from years back. This is a result of either a lack of desire to push forward to the next-generation option (10nm) or the inability to bring that new process node to a status that maintains satisfactory margins and yields. Companies like Samsung, TSMC, and GlobalFoundries continue to push new process node tweaks to customers despite Intel’s stationary position. Samsung has been producing and shipping 10nm chips for customers like Qualcomm for nearly a full year, while Intel will only start doing so sometime in mid-2018.

The memo from BK weighs in on the cultural changes necessary to push Intel over this speed bump with surprising candor. Krzanich says that the future of Intel will be as a “50/50 company,” one that sees half of its revenue from the stalwart PC space and half from “new growth markets.” These will include memory, FPGAs (programmable arrays for faster development of new products), the Internet of Things, artificial intelligence, and even autonomous driving. The memo states that in many of these areas Intel will be the underdog in the fight and that it will require the use of “new, different muscles” for the company’s products to have the impact he sees coming.

Part of flexing these new muscles will mean taking more risks, something that Intel has been extremely hesitant to do in the last decade. For a blue-chip company that has been as successful as it has for the last 50 years, trying new things will mean an increase in R&D budget along with the expectation and acceptance of failure in some portion of these new ventures. Being able to “determine what works and moving forward” is a mentality that exists naturally at organizations like Google but must be grown and matured manually at Intel.

BK mentions a term, “One Intel,” calling it an “aggressive company.” Part of that shift is being driven by shareholders who demand Intel move to address the market opportunity before it. No other company in the world has a portfolio of products and minds like Intel’s, and though it may make more mistakes than successes at first, there is little doubt what it CAN DO if the management and leadership are truly committed to the changes put forth in this memo.

Intel believes that there is as much as $260 billion worth of addressable market for it to go after in 2018 and beyond. It’s a pot of gold that many other talented and driven organizations are going after. But if you believe Intel’s CEO, “the world will run on Intel silicon.”

TITAN V launch strengthens machine learning lead for NVIDIA

Earlier this week, NVIDIA launched the Titan V graphics card at the NIPS (Neural Information Processing Systems) conference in Long Beach, to the surprise of many in the industry. Though it uses the same Volta architecture GPU that has been shown, discussed, and utilized in the Tesla V100 product line for servers, this marks the first time anything based on this GPU design has been directly available to the consumer.

Which consumer, though, is an interesting distinction. With its $3000 price tag, NVIDIA positions the Titan V toward developers and engineers working in the machine learning field, along with other compute workloads like ray tracing, artificial intelligence, and oil/gas exploration. With the ability to integrate a single graphics card into a workstation PC, small and growing businesses or entrepreneurs will be able to develop applications utilizing the power of the Volta architecture and then deploy them easily on cloud-based systems from Microsoft, Amazon, and others that offer NVIDIA GPU hardware.

Giving developers this opportunity at a significantly reduced price and barrier to entry helps NVIDIA cement its position as the leading provider of silicon and solutions for machine learning and neural net computing. NVIDIA often takes a top-down approach to new hardware releases, first offering it at the highest cost to the most demanding customers in the enterprise field, then slowly trickling out additional options for those that are more budget conscious.

In previous years, the NVIDIA “Titan” brand has targeted a mixture of high-end enthusiast PC gamers and budget-minded developers and workstation users. The $2999 MSRP of the new Titan V moves it further into the professional space than the enthusiast one, but there are still some important lessons that we can garner about Volta, and any future GPU architecture from NVIDIA, with the Titan V.

I was recently able to get a hold of a Titan V card and run some gaming and compute applications on it to compare to the previous flagship Titan offerings from NVIDIA and the best AMD and its Radeon brand can offer with the Vega 64. The results show amazing performance in nearly all areas, but especially in the double precision workloads that make up the most complex GPU compute work being done today.

It appears that gamers might have a lot to look forward to with the Volta-based consumer GPU that we should see arriving in 2018. The Titan V is running at moderate clock speeds and with unoptimized gaming drivers but was still able to offer performance 20% faster than the Titan Xp, the previous king-of-the-hill card from NVIDIA. Even more impressive, the Titan V is often 70-80% faster than the best AMD is putting out, running modern games at 4K resolution much faster than the Vega 64. And the GV100 GPU on the card is doing this while using significantly less power.

Obviously at $3000, the Titan V isn’t on the list of cards that even the most extreme gamer should consider, but if it is indicative of what to expect going into next year, NVIDIA will likely have another winner on its hands for the growing PC gaming community.

The Titan V is more impressive when we look at workloads like OpenCL-based compute, financial analysis, and scientific processing. In key benchmarks like N-body simulation and matrix multiplies, the NVIDIA Titan V is 5.5x faster than the AMD Radeon RX Vega 64.
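
For context, comparisons like this are typically built from simple double-precision kernels. Below is a minimal sketch of how one might time an FP64 matrix multiply with PyTorch on a CUDA GPU; it is an illustrative stand-in, not the benchmark suite behind the figures above:

    import time
    import torch

    # Illustrative FP64 GEMM timing; requires a CUDA-capable GPU and a CUDA build
    # of PyTorch. Not the actual benchmark suite used for these numbers.
    def fp64_matmul_tflops(n=4096, iters=10, device="cuda"):
        a = torch.randn(n, n, dtype=torch.float64, device=device)
        b = torch.randn(n, n, dtype=torch.float64, device=device)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
        flops = 2 * n ** 3 * iters      # multiply-add count for a dense matmul
        return flops / elapsed / 1e12   # achieved FP64 TFLOPS

    print(f"{fp64_matmul_tflops():.2f} TFLOPS (FP64)")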

Common OpenCL based rendering applications use a hybrid of compute capabilities, but the Titan V is often 2x faster than the Vega graphics solutions.

Not every workload utilizes double precision computing, and those that don’t show more modest, but still noticeable, improvements with the Volta GPU. AMD’s architecture is quite capable in these spaces, offering great performance for the cost.

In general, the NVIDIA Titan V proves that the beat marches on for the graphics giant, as it continues to offer solutions and services that every other company is attempting to catch up to. AMD is moving forward with the Instinct brand for enterprise GPU computing and Intel is getting into the battle with its purchase of Nervana and hiring of GPU designer Raja Koduri last month. 2018 looks like it should be another banner year for the move into machine learning, and I expect its impact on the computing landscape to continue to expand, with NVIDIA leading for the foreseeable future.

AMD Making Strides with EPYC Server Platform

We all know that a competitive market is one that is healthy. Multiple options for your PC, your smartphone, and yes, your server, mean that every party involved needs to be more aggressive in development to outdo the competitor. For many long years, that type of environment did not exist in the server space and Intel was able to dominate the field with the Xeon processor family and almost no pressure from outside companies.

AMD announced its EPYC processor family this past summer, and though it always takes time for adoption and ramp of a new enterprise-class technology, there has been more anticipation for retail-ready releases from this launch than any other. Many questioned AMD’s ability to re-enter the server market against Intel’s 99%+ market share and strong grip on the hardware and software ecosystem. Any noise or promotion from partners would be welcome, and is required for the community at large to have confidence in AMD’s claims.

HPE Claims Performance Records with EPYC

Last month we got one of the first such endorsements, and likely the most important to date. HPE not only announced its second family of servers integrating AMD EPYC processors but did so with a press release touting record-breaking performance and impressive claims across the board. For those that want the details: the HPE ProLiant DL385 Gen10 system with dual-socket EPYC processors broke records for two-socket systems on SPECrate2017_fp_base and SPECfp_rate2006. In short, the server showed impressive scaling in floating point performance by combining a pair of 32-core EPYC processors in these industry standard benchmarks.

Additionally, HPE claimed that this platform would offer “up to 50% lower cost per virtual machine” compared to other dual-socket servers, thanks in part to the 4TB of addressable system memory and 128 lanes of available PCI Express provided by the AMD CPUs.

This is just one server family and just one OEM (albeit a big one), but it marks another milestone in AMD’s march back to relevancy in the server space. AMD CEO Lisa Su cautioned me recently that this would be a slow and arduous process, even if AMD was excited about the rate of adoption it was seeing. It tells me that AMD is doing the right things, working with the right people, and has the right mindset and aggressive stance to make waves in the enterprise space once again.

Intel is Paying Attention

For its part, Intel is taking notice. Though it has a 99% market share in the server and data center space, Intel went to the press last week with internal testing that compares its own Xeon Scalable processor family against the AMD EPYC platform. It runs through a myriad of benchmarks and concludes that Intel still offers an advantage in performance per core and in many of the workloads and benchmarks that server professionals look to for guidance. Intel also questioned the performance consistency of AMD EPYC processors because of the complications surrounding its multi-die approach to core scaling (as opposed to the single, monolithic die that Intel utilizes).

It’s unlikely that any of the results that Intel presented to the media are “wrong” but the importance of the effort in my mind, is that Intel felt pressured enough to address this in a public fashion. All companies do competitive analysis on systems and hardware but rarely is that data presented in such a fashion to essentially “call out” that competing company and the coverage by media and analysts. It means that Intel sees a threat and is taking it seriously – something it hasn’t done in this space for nearly a decade.

AMD was upfront during its launch of EPYC that it would do very well in specific areas of the enterprise and datacenter space, but in others it would be behind what Intel can offer. That still seems like an accurate assessment though Intel is doing some of the heavy lifting to indicate where those “other” areas are. I still view EPYC as competitive in enough areas to retain its original value proposition and it appears that partners like Supermicro and HPE agree.

As we move forward, the future in the server space is brighter for customers thanks to a competitive landscape. Intel executives and financial bottom lines won’t appreciate any drop from the near-100% market share, obviously, but for the rest of us, seeing a healthy and active AMD in this space is a critical piece of the story of improvement and scalability for the datacenter. AMD should continue to see customer adoption and a resulting improvement in the financial status of its enterprise business unit.

AMD Reenters High-end Markets with Threadripper and Vega

The high-end of the consumer market, often paralleled with the idea of prosumers and enthusiasts, is often overlooked as a segment with little import on the overall sales and profitability of technology companies. Though unit sales in this window are smaller than in either the mobile or mainstream consumer space, the ASPs (average selling prices) skew high, resulting in much better profit margins than in lower segments. Not only do companies that successfully address the needs of the prosumer and enthusiast enjoy the ability to sell at lower financial risk, but there are also fringe benefits to being the market share and mindshare leader in these spaces.

The “halo effect” is one in which a flagship product that dominates headlines and performance metrics in the enthusiast markets sees its benefits waterfall down into more modestly priced hardware. Samsung is often a beneficiary of this idea, selling the Note and S-series of smartphones at high prices that convince those with smaller budgets to buy similar looking and feeling Samsung products once they enter the store. Influencers that do buy into the flagship product series will tout the superior benefits of these products to friends, family, and social groups. This gives confidence to system integrators, corporate buyers, and other consumers that the product they CAN afford will be similarly excellent.

In the PC field, prosumer and high-end segments will frequently reuse technology from workstation or data center class products. This saves on development time and costs, adding more to the profit margin of the already inflated segment.

For these reasons and more, it has been a weight around AMD’s ankle that it has not been competitive in either high-end desktop GPUs (graphics processors) or high-end desktop CPUs (central processors) for years. On the graphics front, the last high-end desktop (HEDT) product released was the Radeon Fury X in June of 2015. At launch, the product was moderately successful, bringing attention to AMD and the Radeon brand, but it also brought difficulties in product reviews and quality control. In the span of the next 3-4 months, NVIDIA and its GeForce product family completely retook the leadership position with the never-challenged GTX 980 Ti. Then in May of 2016, NVIDIA extended its leadership position with another GTX family launch. This happened again in March of 2017, when NVIDIA thought AMD might be on the verge of releasing a competitive solution. It never showed.

The newly released Radeon RX Vega product line brings AMD back into the high-end prosumer and enthusiast picture, offering competitive pricing and performance against the NVIDIA lineup. It utilizes the same graphics architecture found in the workstation-class Radeon Pro cards and the enterprise-class high-performance compute Instinct family. Though there are early reports of stock and availability problems that AMD is working through, RX Vega gives AMD an opportunity to take back some amount of market share in this influential space. The profit margin of RX Vega is questionable, though, with known cost concerns around the graphics processor and chosen memory solution.

In the CPU space, AMD has never before competed in the HEDT segment that Intel created in 2008. In fact, AMD has been absent from the majority of competitive processor segments for more than a decade, depending on the integrated graphics portion of its designs to keep it afloat in trying times. With the release of Ryzen 7 in March of 2017, AMD started making waves once again. August sees the launch of the new Threadripper family, a high-performance processor that directly targets content creators, developers, engineers, and enthusiasts. Prices on these parts range from $799 to $999, and because of heavy repurposing of server design, chip organization, and infrastructure, they will likely carry exceedingly high profit margins.

Threadripper doesn’t just make AMD competitive in a space that has previously been 100% dominated by Intel; it puts it in a leadership position that is turning heads. Performance in workloads for video creation, 3D rendering, ray tracing, and more are running better on the 16-core implementation that AMD offers compared to the 10-core designs that Intel is presently limited to.

While there are no guarantees of market share improvements or profitability, every unit of RX Vega and Ryzen Threadripper sold means gains for AMD against Intel and NVIDIA. Intel product managers and executives, already awoken from slumber by Ryzen 7 in March, have perked up, seeing the threat of mindshare if nothing else. The company is wary of threats to its perceived dominance and will react with lower prices and higher performance options this year.

RX Vega is in a tougher spot, unable to come out as a clear winner in the field, even for a short while. NVIDIA has been sitting on a growing armory of designs and product, waiting to see how the competition would shake out to measure the need to release it. For now, NVIDIA doesn’t appear to be overly concerned about the impact Vega will have in the high-end consumer spaces.

No product portfolio is perfect, but CEO Lisa Su and the executive team at AMD must be pleased with the recent shift in the company’s perception in the flagship consumer markets. The Radeon group can finally point to RX Vega as a reasonable option against all but the top-most GeForce offerings, and it manages a performance-per-dollar advantage in parts of the lineup. For the processor division, Threadripper is a marvelous use of existing technology to address a market that has nothing but room to grow. The marketing and partnership opportunities continue to flow for AMD here, and Intel will be spinning for a bit to regain its footing.

There are significant hurdles ahead (continued graphics innovation, competing in the mobile processor space), but AMD is surging upward.

NVIDIA Uses AI to Bolster Professional Graphics

NVIDIA continues to bolster its position in the market with an emphasis on machine learning and artificial intelligence in addition to its leadership positions in graphics for mobile, consumer, and professional segments. At SIGGRAPH this week in Los Angeles, NVIDIA announced several new projects that aim to implement an AI angle to graphics specific tasks and workloads, showing the value of AI across a wide spectrum of workflows as well as the company’s leadership position for it.

The most exciting AI announcement came in the form of an update to the OptiX SDK that implements an AI-accelerated denoising capability for its ray tracing engine. Ray tracing can create highly realistic imagery but comes at a high computational cost that forces renders to take minutes or even hours to complete complex scenes. When these images are in a partially computed state, they look like noisy photographs, with speckled artifacts similar to what you see in photos taken in extremely low light.

NVIDIA and university researchers use deep learning and GPU computing to predict the final output images from those partly finished results in a fraction of the time. The AI model is trained up front on many “known good” images, but it then gives creators and artists the ability to move around the scene, changing view angles and framing the shot, in a fraction of the usual time. The result is a near real-time, interactive capability with a high-quality ray traced image, accelerating the artist’s workflow and vision.
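
Conceptually, the denoiser is just a learned image-to-image mapping applied after a low-sample render. Below is a minimal sketch of the idea in PyTorch, using a toy convolutional network that stands in for (and is in no way equivalent to) NVIDIA’s trained model:

    import torch
    import torch.nn as nn

    # Toy denoiser: maps a noisy, partially converged render (RGB) toward the
    # converged image. Illustrative only; not NVIDIA's OptiX denoiser network.
    class TinyDenoiser(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, noisy):
            # Predict a correction and add it back onto the noisy input.
            return noisy + self.net(noisy)

    # At interactive time: render a few samples per pixel, then denoise the result.
    model = TinyDenoiser().eval()
    noisy_render = torch.rand(1, 3, 256, 256)   # stand-in for a low-sample render
    with torch.no_grad():
        preview = model(noisy_render)           # approximation of the converged image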

Facial animation is one of the most difficult areas of graphics production. NVIDIA has found a way to utilize deep learning neural networks to improve the efficiency and quality of facial animations while saving creators hours of time. Instead of manually touching up live-action actors’ footage in a labor-intensive process, researchers were able to train the network for facial animation using only the actors’ footage in about five minutes.

NVIDIA also implemented the ability to generate realistic facial animation from the resulting data with only audio. This tool will allow game creators to implement more characters and NPCs with realistic avatars in multiple languages. Remedy Entertainment, makers of the game Quantum Break, helped NVIDIA with the implementation and claim it can cut down on as much as 80% of the work required for large scale projects.

Anti-aliasing is a very common graphics technique to reduce jagged edges on polygon models. NVIDIA researchers have also found a way to utilize a deep neural network to recognize these artifacts and replace them with smooth, color-correct pixels.

Finally, NVIDIA adapted ray tracing with AI as well, using a reinforcement learning technique to steer ray paths toward those considered “useful.” Traces that are more likely to connect lights to the virtual camera (the viewport) are given priority, as they will contribute to the final image. Wasted traces that go to portions of the geometry that are blocked or unseen by the camera can be culled before the computation is done, lessening the workload and improving performance.

These four examples of AI being used to accelerate graphics workloads show us that the same GPUs used to render games to your screen can be harnessed uniquely to accelerate game and film creators. Requiring fewer man hours and resources for any part of the creation pipeline means developers can spend more time building richer environments and experiences for the audience. These examples are indicative of the impact that AI and deep learning will have on any number of markets and workflows, touching on much more than typical machine learning scenarios. NVIDIA paved the way to GPU computing with CUDA, and it continues to show why its investment in artificial intelligence will pay off.

AMD Puts More Pressure on Intel with Threadripper and Ryzen 3

With the release of the Zen architecture and the Ryzen product family for consumer PCs, AMD started down a path of growth in the processor market that it has been absent from for basically a decade. The Ryzen 7 processor family directly targets the Intel Core i7 line of CPUs that have been incredibly dominant and turns the market on its side by doubling core and thread counts at like price points. The platform surrounding the CPU was modernized, leaving very little on the feature list that AMD couldn’t match to Intel’s own. Followed by the Ryzen 5 launch a few weeks later, AMD continued the trend by releasing processors with higher core and thread counts at every price bracket.

More recently the EPYC server and data center processor marked AMD’s first entry for the enterprise markets since Opteron, a move that threatens at least some portion of the 99.5% market share that Intel currently holds. By once again combining higher core counts with aggressive pricing, EPYC will be a strong force in the single and dual-socket markets immediately, leaving the door open for further integration with large data center customers that see firsthand the value AMD can offer compared to the somewhat stagnant Xeon product family.

Though reviews aren’t launching for another couple of weeks, on Thursday AMD showed all of its cards for the summer’s hottest CPU launch, Ryzen Threadripper. With the hyper-aggressive naming scheme to go along with it, Threadripper will be a high-core-count processor and platform, based on the EPYC socket and design, targeting the high-end desktop market (HEDT) that Intel has had to itself for nearly that same 10-year window. Intel was the first to recognize the value of taking its Xeon product family, lowering features a slight degree, and then sell it to eager PC enthusiasts that want the best of the best. Families like Broadwell-E, Sandy Bridge-E, and most recently, Skylake-X that was released in June, have dominated the small, but very profitable, segment of the market for overclockers, extreme gamers, and prosumers that need top level multi-threaded performance.

CEO Lisa Su and CVP of marketing John Taylor took the wraps off the clocks, core counts, and prices in a video posted to the company’s YouTube page, along with a blog post from SVP of compute Jim Anderson, showing confidence in AMD’s message. Available in early August, Threadripper will exist as a 12-core/24-thread 1920X with frequencies as high as 4.0 GHz for $799 and as a 16-core/32-thread 1950X hitting the same 4.0 GHz for $999. No doubt these are high prices for consumer processors, but compared to the competing solutions from Intel, AMD is pricing them very aggressively, following the same strategy that has caused market disruption with the Ryzen 7 and 5 releases. Intel’s $999 Core i9-7900X is a 10-core/20-thread part, putting it at a disadvantage in multi-threaded workloads despite its architectural advantage in single-threaded performance.

Impressive speeds aside, what does this mean for AMD as we get into the heat of summer? I expect Ryzen Threadripper to be a high-demand product compared to the Skylake-X solution, giving AMD the mindshare and high margin space to continue seeing the benefits of its investment in Zen and the Ryzen family. Intel had already reacted to the Ryzen 7 launch with price drops and adjustments to the timing of Skylake-X but arguably not to the degree necessary to maintain price-to-performance leadership across the board. Threadripper will offer heavy multi-taskers, video editors, 3D animators, and other prosumer style users a better solution at a lower price point based on performance estimates.

AMD did release one benchmark metric for us to analyze until full reviews come out in August. Cinebench R15 is an industry standard test that runs a ray-traced rendering pass on a specific data set, timing it to generate a higher-is-better score. The Core i9-7900X, the current flagship part from Intel that sells for $999, generates a score of 2186. The upcoming Threadripper 1920X (12 cores) scores 2431, roughly 11% higher than the Intel processor that costs $200 more. At like-for-like pricing, the Threadripper 1950X (16 cores) scores 3046, a full 39% faster than what Intel currently has on the market.
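
Using the scores above, the relative positions work out as follows (a quick arithmetic check of the quoted percentages, not new benchmark data):

    # Cinebench R15 multi-threaded scores quoted above (higher is better).
    scores = {
        "Core i9-7900X ($999)": 2186,
        "Threadripper 1920X ($799)": 2431,
        "Threadripper 1950X ($999)": 3046,
    }

    baseline = scores["Core i9-7900X ($999)"]
    for name, score in scores.items():
        print(f"{name}: {score} ({(score / baseline - 1):+.0%} vs. Core i9-7900X)")
    # The 1920X lands roughly 11% ahead and the 1950X roughly 39% ahead.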

Intel’s roadmap includes Skylake-X parts with up to 18 cores, but we won’t see them until September or October, and prices there reach as high as $1999.

Along with the high-end desktop announcement of Threadripper, AMD also revealed some details on the Ryzen 3 processor SKUs that will compete directly with the high-volume Intel Core i3 family. The Ryzen 3 1200 will have 4 cores, 4 threads, and a clock speed running up to 3.4 GHz, while the Ryzen 3 1300X is a 4-core/4-thread part with a 3.7 GHz peak frequency. The advantages AMD offers here remain in line with the entire Ryzen family – higher core counts than Intel at the same or better price. The Core i3 line from Intel runs 2-core/4-thread configurations, so Ryzen 3 should offer noticeably better multi-threaded performance with four “true cores” at work. No pricing information is available yet, but the parts should be on store shelves July 27th, so we will know soon.

In the span of just five months, AMD has gone from a distant competitor in the CPU space to a significant player with aggressive, high-performance products positioned for market share growth. The release of Threadripper will accelerate the core-count race in consumer devices, enabling further development for high-performance computing, and it also gives AMD an avenue for higher-margin ASPs and the “halo product effect” that attracts enthusiasts and influences buying decisions for all the product families below it. AMD has a long way to go to get back to where it was in 2006, but the team has built a combination of technology and products that might get it there.

AMD and NVIDIA Target Miners with Specific Hardware, Longer Production Times

The cryptocurrency mining rush is in a delicate state. The values of Bitcoin, Ethereum, and other smaller currencies have fallen off the rocket-like trajectory they were on last month and have settled into a slower, more moderate cycle of growth. Last week I wrote a story that warned of a pending backfire for those betting heavily on the hardware portion of the mining craze, and I stand by the risk that AMD and NVIDIA must address as we approach the point where rising mining difficulty makes GPU-based mining inefficient.

The early wave of sales spikes on graphics hardware happened at the previous pricing models, but both AMD and NVIDIA are attempting to improve their return on cryptocurrency-driven sales by raising GPU prices to partners in line with current market pricing. Previously only the consumer-facing resellers were seeing the advantages of the higher pricing, and it was only a matter of time before NVIDIA and AMD took their share. While in theory this might affect the MSRP for these parts in the GeForce and Radeon lines, in practice the current elevated prices will remain. Expect NVIDIA and AMD to lower these temporary price hikes when demand for the cards dies down.

In the last couple of days however, both AMD and NVIDIA add-in card partners began listing and selling mining-specific cards that separate themselves through reduced feature sets and lower pricing. NVIDIA is offering both GP106 and GP104 based hardware, equivalent to the mid-range GTX 1060 and high-end GTX 1070/1080 gaming cards, though without the branding to indicate it. Partners like ASUS, EVGA, MSI, and others are being very careful to NOT call these products by the equivalent GeForce brands, instead using something equivalent to “ASUS MINING-P106-6G”. To a seasoned miner, the name gives enough information to estimate the performance and value of the card but tells customers looking for gaming hardware that this one is off limits. Why? Many of these are being sold without display output connections, making them less expensive but nearly unusable for any purpose other than compute-based cryptocurrency mining.

AMD has partners offering similar options, some with and some without display output connectivity. The first wave is based on the Radeon RX 400-series of GPUs rather than the current RX 500 products.

These new offerings allow both AMD and NVIDIA to take advantage of the mining market to sell an otherwise untenable product. For AMD, after the launch of its RX 500-series of cards in April, any inventory of the RX 400-series needed to be sold at steep discounts or risk sitting in warehouses for months. By targeting these products at mining directly, where they are still among the most power- and dollar-efficient options for the workload, AMD can revive the product line without sacrificing as much on price.

In other cases, for both AMD and NVIDIA, the ability to sell headless graphics boards (those without display connectivity) offers the chance to sell GPUs that might otherwise have been sent to recycling. As silicon is binned at the production facility, any GPU without a fully operational display engine would be useless to sell to a gamer but can operate as part of a cryptocurrency mining farm without issue. This means better margins, more sales, and an overall more efficient product line moving forward.

Producing mining-specific cards should also benefit AMD and NVIDIA in the longer view, assuming they can make and sell enough for it to be effective. Because headless GPUs are not useful to the gaming community, they cannot be a part of the flood of products into the resale market to impact the sales of legitimate, newer gaming hardware from either party. This dampens the threat to GPU sales in the post-mining bubble, but only to the degree that AMD and NVIDIA are successful in seeding this hardware to the cryptocurrency audience.

The quantity of these parts is the biggest question that remains. Initial reports from partners indicate that only a few thousand are ready to sell, and mostly in the APAC market where the biggest farms tend to be located. But I am told that both vendors plan to ramp up this segment rather quickly, hoping to catch as much of the cryptocurrency wave as possible. AMD in particular has extended its Polaris GPU production through Q1 of 2018, a full quarter past original expectations. This is partially due to the outlook for the company’s upcoming high-end Vega architecture but also is a result of the expected demand for GPU-based mining hardware this year. AMD appears to be betting heavily on the mining craze to continue for the foreseeable future.

How the Cryptocurrency Gold Rush Could Backfire on NVIDIA and AMD

The effects of the most recent cryptocurrency mining phase are having a direct impact on various markets, most notably on the GPU product lines from NVIDIA and AMD. Without going into the details of what a cryptocurrency is or how it is created and distributed on a shared network, you only need to understand that it is a highly speculative market, a gold rush accelerated and made profitable by how efficiently the mining workload runs on graphics cards usually intended for the PC gaming market. Potential investors need only purchase basic PC components and as many GPUs as they can afford to begin a mining operation with the intent to turn a profit.

As we look at the sales channels today, AMD Radeon graphics cards from the current and previous generation of GPUs are nearly impossible to find in stock, and when you do come across them, they are priced well above the expected MSRP. This trend has caused the likes of the Radeon RX 580, RX 570, RX 480, and RX 470 to essentially disappear from online and retail shelves. The impact hit AMD products first because its architecture was slightly better suited for the coin mining task while remaining power efficient (power being the secondary cost of the mining process). But as the well dries up around the Radeon products, users are turning their attention to NVIDIA GeForce cards from the Pascal-based 10-series product line, and we are already seeing the resulting low inventory and spiking prices for them as well.

Positive Impacts

For AMD and NVIDIA, as well as their add-in card partners that build the products based on each company’s GPU technology, the coin mining craze is a boon for sales. Inventory that might have sat on store shelves for weeks or months now flies off them as soon as it is put out or listed online, and reports of channel employee-driven side sales are rampant. From the perspective of this chain of GPU vendor, card vendor, and reseller, a sale of a card is never seen as a negative. Products are moving from manufacturers to stores to customers; that is the goal of this business from the outset. Cryptocurrency has kept the AMD Radeon brand selling even when its product stack might not be as competitive with NVIDIA as it would like.

This trend of GPU sales for coin mining is not going unnoticed by the market either. Just today a prominent securities fund moved NVIDIA’s stock to “overweight” after speaking with add-in card vendors about stronger than expected Q2 sales. AMD’s stock has seen similar improvement, and all relevant indicators show continued GPU sales increases through the next fiscal quarter.

Negative Impacts

With all that is going right for AMD and NVIDIA because of this repurposed use of current graphics card product lines, there is a significant risk at play for all involved. Browse any gaming forum or subreddit and you’ll find just as many people unhappy with the cryptocurrency craze as you will find happy with its potential for profit. The PC gamers of the world that simply want to buy the most cost-effective product for their own machines are no longer able to do so, with inventory snapped up the instant it shows up. And when they can find a card for sale, it carries a significantly higher price. A look at Amazon.com today for Radeon RX 580 cards shows starting prices at the $499 mark and stretching to as high as $699. This product launched with an expected MSRP of just $199-$239, making the current prices a more than 2x increase.
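
For reference, a quick calculation from the figures above shows how far street pricing has drifted from MSRP (no new pricing data, just the numbers already quoted):

    # Radeon RX 580 pricing quoted above: launch MSRP range vs. observed listings.
    msrp_low, msrp_high = 199, 239
    street_low, street_high = 499, 699

    print(f"Best case:  {street_low / msrp_high:.1f}x MSRP")    # ~2.1x
    print(f"Worst case: {street_high / msrp_low:.1f}x MSRP")    # ~3.5x
    # Even the cheapest listing is more than double the top of the MSRP range.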

As AMD was the first target of this most recent coin mining boom, the Radeon brand is seeing a migration of its gaming ecosystem to NVIDIA and the GeForce brand. A gamer that decides a $250 card is in their budget for a new PC will find that the Radeon RX 580 is no longer available to them. The GeForce GTX 1060, with similar performance levels and price points, is on the next (virtual) shelf over, so that becomes the de facto selection. This brings the consumer into NVIDIA’s entire ecosystem: using its software like GeForce Experience, looking at drivers, game optimizations, and free game codes, and inviting research into GeForce-specific technology like G-Sync. Radeon has not lost a sale this generation (as the graphics card that consumer would have bought was instead purchased for mining), but it may have lost a long-term customer to its competitor.

Even if the above problem fades as NVIDIA cards also become harder to find, NVIDIA has the advantage of offering current generation, higher cost products as an option to PC gamers. If a user has a budget of $250 and finds that both the GeForce and Radeon options are gone to the crypto-craze, NVIDIA has GeForce GPUs like the GTX 1070 and GTX 1080 that are higher priced, but more likely to be at their expected price point (for now). AMD has been stagnant at the high end for quite some time, leaving the Radeon RX 580 as the highest performing current generation product.

Alienating the gaming audience that sustains both Radeon and GeForce from year to year is a risky venture, but one that appears to be impacting AMD more than NVIDIA, for now.

Other potential pitfalls from this cryptocurrency market come into play when the inevitable bubble reaches its peak. All mining operations get more difficult over time, on the order of months, which lowers the profitability of mining coins and requires significantly more upfront investment to turn a profit. The craze surrounding mining is driven in large part by the many “small” miners, those that run 10-30 cards in their home. Once the dollar figures start dropping and the hassle and cost of upkeep become a strain, these users will (and have in the past) halt operations.
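
To make the squeeze on “small” miners concrete, here is a minimal sketch of the economics involved. Every number in it is a hypothetical placeholder chosen only to show the shape of the curve, not real network, coin price, or hardware data: as the network’s total hash rate (a stand-in for difficulty) climbs, a fixed 10-card rig’s share of the rewards shrinks while its power bill stays constant, and the monthly profit erodes.

```python
# Minimal sketch of small-scale mining economics as difficulty rises.
# All figures are hypothetical placeholders for illustration only.

def monthly_profit(rig_hashrate_mhs, network_hashrate_ghs, block_reward_coins,
                   blocks_per_month, coin_price_usd, rig_watts, power_cost_kwh):
    """Expected monthly profit for one rig, ignoring pool fees and variance."""
    share = rig_hashrate_mhs / (network_hashrate_ghs * 1000)  # rig's share of total hash power
    revenue = share * block_reward_coins * blocks_per_month * coin_price_usd
    power_cost = rig_watts / 1000 * 24 * 30 * power_cost_kwh
    return revenue - power_cost

# A fixed 10-card rig while the network hash rate doubles every few months.
for network_ghs in (15_000, 30_000, 60_000, 120_000):
    profit = monthly_profit(rig_hashrate_mhs=10 * 30, network_hashrate_ghs=network_ghs,
                            block_reward_coins=5, blocks_per_month=175_000,
                            coin_price_usd=300, rig_watts=10 * 150, power_cost_kwh=0.12)
    print(f"network at {network_ghs:>7,} GH/s -> roughly ${profit:,.0f} per month")
```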

This has several dangers for AMD and NVIDIA. First, inventory that may be trying to “catch up” to cryptocurrency-driven sales rates could be caught in the line of fire, leaving both GPU vendors and their partners holding product that they cannot sell. Second, the massive amounts of hardware used for mining will flood resale markets like eBay, Amazon, and enthusiast forums. Miners no longer interested in cryptocurrency will be competing to sell the RX 580s they have amassed as quickly as possible, dropping the value of the product significantly. If AMD or NVIDIA are rolling out a new generation of product at that time, new product sales will be directly impacted as slightly older hardware at a great value is suddenly available to that eager gaming audience.

As for a more direct financial risk, both companies’ stocks risk corrections when this mining bubble breaks down.

The disappointing part of this situation is that neither AMD nor NVIDIA can do anything to prevent the fallout from occurring. They could verbally request that miners leave products for gamers, but that would obviously stop nothing. A price hike would only hurt the gaming community more, as miners are clearly willing to invest in GPUs when they are used for profit. And trying to limit mining performance with firmware or driver changes would be thwarted by an audience of highly intelligent mining groups with re-flashes and workarounds.

The rumors of both vendors offering mining-specific hardware appear to be true. Selling headless (without display connectors) graphics cards is perfect for crypto mining and makes them unusable for gaming. This allows NVIDIA and AMD to use previously wasted GPUs that might have had a fault in the display engine, for example. But it would not be enough of a jump in inventory to free up standard cards for gamers; if anything, the mining community would simply swallow that supply as well.

The cryptocurrency market may not be a bubble, but the GPU-based mining operations that exist today certainly are, and the long-term impact on both AMD and NVIDIA will be a negative one. For today, all parties involved will enjoy high sell-through, increased ASPs, and happy investors. But previous instances of this trend make it clear there will be fallout. The only question is how much it will impact future products and which GPU vendor is capable of balancing current benefits against long-term detriment.

AMD and Intel Race Towards High Core Count CPU Future

As we prepare for a surprisingly robust summer season of new hardware technologies to be released to the consumer, both Intel and AMD have moved in a direction that seems both inevitable and wildly premature. The announcement and pending introduction of high core count processors, those with many cores that share each company’s most modern architecture and design, brings with it an interesting combination of opportunity and discussion. First, is there a legitimate need for this type of computing horsepower in this form factor, and second, is this something that consumers will want to purchase?

To be clear, massive core count CPUs have existed for some time, but only in the server and enterprise markets. Intel’s Xeon line of products has breached the 20-core mark in previous generations, and if you want to dive into Xeon Phi, a chip that uses older, smaller cores, you will find options with over 70 cores. Important for applications that require a significant amount of multi-threading or virtualization, these were expensive. Very expensive, crossing the $9,000 mark.

What Intel and AMD have begun is a move to bring these high core count products to consumers at more reasonable price points. AMD announced Threadripper as part of its Ryzen brand at its financial analyst day, with core counts as high as 16 and thread counts of 32 thanks to SMT. Then at Computex in Taipei, Intel one-upped AMD with its intent to bring an 18-core/36-thread Skylake-X CPU to the new Core i9 lineup. Both are drastic increases over the current consumer landscape, which previously capped out at 10 cores for Intel and 8 cores for AMD.

Let’s first address the need for such a product in the world of computing today. There are many workloads that benefit easily from multi-threading, and consumers and prosumers that focus on video production, 3D rendering/modeling, and virtualization will find that single socket designs with 16 or 18 cores improve performance and scalability without forcing a move to a rackmount server infrastructure. Video encoding and transcoding have long been the flagship workloads to demonstrate the power of many-core processors. AMD used them, along with 3D rendering workloads in applications like Blender, to demonstrate the advantages of its 8-core Ryzen 7 processors in the build-up to their release.

Other workloads like general productivity applications, audio development, and even PC gaming are impacted less by the massive core count increases. In fact, any application that is heavily dependent on single-threaded performance may see a decrease in overall performance on these processors as Intel and AMD adjust clock speeds down to fit these new parts into some semblance of a reasonable TDP.
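
A small worked example helps show why that trade-off bites. The sketch below applies Amdahl’s law to compare a hypothetical 16-core part running at a lower clock against a higher-clocked quad core; the clock speeds and parallel fractions are assumed round numbers, not measured figures for any specific Intel or AMD product.

```python
# Amdahl's law sketch: relative throughput of a lower-clocked 16-core part
# versus a higher-clocked quad core, as the parallel fraction of the workload
# varies. Clock speeds are assumed round numbers for illustration only.

def amdahl_speedup(parallel_fraction, cores):
    """Classic Amdahl's law speedup over a single core."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

quad_clock_ghz = 4.5   # assumed boost clock of a mainstream quad core
many_clock_ghz = 3.4   # assumed lower clock of a 16-core part

for p in (0.30, 0.60, 0.90, 0.99):
    quad = amdahl_speedup(p, 4) * quad_clock_ghz
    many = amdahl_speedup(p, 16) * many_clock_ghz
    print(f"parallel fraction {p:.0%}: 16-core vs quad-core throughput ratio = {many / quad:.2f}")
```

At a 30% parallel fraction the ratio comes out below 1.0, illustrating how lightly threaded applications can actually regress on the many-core part, while heavily threaded work pulls well ahead.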

The truth is that hardware and software are constantly in a circular pattern of development – one cannot be fully utilized without the other. For many years, consumer processors were stuck mostly in a quad-core rut, after an accelerated move to it from the single core architecture days. The lack of higher core count processors let software developers get lazy with code and design, letting the operating system handle the majority of threading operations. Once many-core designs are the norm, we should see software evolve to take advantage of it, much as we do in the graphics market with higher performance GPUs pushing designers forward. This will lead to better utilization of the hardware being released this year and pave the road for better optimization for all application types and workloads.

From a production standpoint Intel has the upper hand, should it choose to utilize it. With a library of Xeon parts built for enterprise markets already slated for release this year and in 2018, the company could easily bring those parts to consumers as part of the X299 platform rollout. Pre-built, pre-designed, and pre-validated, the Xeon family was already being cannibalized for high-end consumer processors in previous generations, but Intel capped that migration in order to preserve the higher prices and margins of the Xeon portfolio. Even at $1,700 for the 10-core 6950X processor, Intel was discounting dramatically compared to the Xeon counterpart.

Similarly, AMD is utilizing its EPYC server product line for the Threadripper processors targeting the high-end consumer market. But AMD doesn’t have a large market share of workstation or server customers, so cannibalization is less of a concern. To AMD, a sale is a sale, and any Ryzen, Threadripper, or EPYC sold is an improvement to the company’s bottom line. It would surprise no one if AMD again took an aggressive stance on pricing its many-core consumer processors, allowing the workstation and consumer markets to blend at the top. Gaining market share has taken precedence over margins for AMD; it started as the initiative for the Polaris GPU architecture and I think it continues with Threadripper.

These products will need to prove their value in the face of demanding platform requirements. Both processor vendors are going to ship the top performing parts with a 165-watt TDP, nearly double that of the Ryzen and Kaby Lake desktop designs in the mainstream market. This requires added complexity for cooling and power delivery on the motherboard. Intel has muddied the waters on its offering by varying the number of PCI Express lanes available and offering a particular set of processors with just four cores, half the memory channels, and 16 lanes of PCIe, forcing platforms into convoluted solutions. AMD announced last week that all Threadripper processors would have the same 64 lanes of PCIe and quad-channel memory support, simplifying the infrastructure.

With that knowledge in place, is higher core count processing something the consumer has been asking for, or is it a solution in search of a problem? The truth is that desktop computers (and notebooks by association) have been stuck at 4 cores in the mainstream markets for several years, and some would argue artificially so. Intel, without provocation from competent competing hardware from AMD, has seen little reason to lower margins in exchange for added performance and capability in its Core line. Even the HEDT market, commonly referred to as the E-series (Broadwell-E, Ivy Bridge-E, and now Skylake-X), was stagnant at 8 cores for longer than was likely necessary. The 10-core option Intel released last year seemed like an empty response, criticized as much for its price ($1,700) as it was praised for its multi-threaded performance.

AMD saw the opportunity and released Ryzen 7 to the market this year, at mainstream prices, with double the core count of Intel Core parts in the sub-$400 category. The result has been a waterfall effect that leads to where we are today.

Yes, consumers have been asking for higher core count processors at lower prices than are currently available. Now it seems they will have them, from both Intel and AMD. But pricing and performance will have the final say on which product line garners the most attention.

The Windows Opportunity for Qualcomm

With both Microsoft and Qualcomm publicly discussing the re-emergence of mobile processors from Qualcomm running the Windows consumer desktop operating system, it seems as good a time as any to dissect what this might mean for the industry and the major players involved. Windows 10 running on Qualcomm processor platforms in tablet and notebook form factors brings with it some incredible opportunities for all involved, including the consumers they are targeting, with a focus on areas the Windows+Intel relationship has neglected for some time. But with that comes substantial risk and many avenues of potential conflict when these systems begin to hit the market at the end of this year.

The Opportunity
Let’s start with the positive outlook and dive into why the Qualcomm and Windows 10 story makes sense and how both the market and consumers will benefit from this new arrangement. Differentiation is the key to success, and Qualcomm is well aware that this move into the consumer and enterprise notebook markets requires more than just creating another line item on a CDW quote sheet. Even though the first thought for many analysts will be to measure the effective benefit of power consumption and battery life, for me the most important aspect of Qualcomm’s venture is connectivity. Each and every mobile platform built with Qualcomm processors, starting with the Snapdragon 835, will ship with a Gigabit-class LTE X16 modem. While a very small and niche market of enterprise users has WAN connectivity today, to the general consumer a cellular-connected notebook is a first.

Depending on how Qualcomm, Microsoft, the partner OEMs, and the carriers work this all out, messaging around LTE connectivity at Gigabit-class speeds is a considerable advantage. Not worrying about Wi-Fi hotspots in airports, restaurants, classrooms, and offices, and instead relying on the same service and coverage as your smartphone, will bring about a revolution in the connected notebook. Just as cell phones changed our expectations of mobile communication, so will the always-connected capability of a Windows 10 machine change expectations for business and consumer users.

Battery life advantages should not be overlooked though. The Snapdragon SoC has the potential to draw much less power than the competing Intel Core m3 series of processors if configured correctly while still offering enough performance to alleviate concerns in the Windows environment. Battery life is a key driver of notebook innovation, along with form factor and design, and though details are sparse today, Qualcomm should be able to bring an improvement here.

Though it is as much of a challenge as an opportunity, Qualcomm has the ability to put the Windows RT fiasco behind it with this launch and put out products, with OEM partners, that are seen as high quality. This means good displays, solidly constructed bodies, high-performance I/O, and trackpads that rival the best in the space. All of this is going to be necessary to pull in customers used to the standards of machines like the Dell XPS 13, Microsoft Surface, or HP Spectre. Working with partners to avoid low-cost, flimsy-feeling hardware re-establishes the premise that Qualcomm and ARM-based platforms are viable work machines.

The combination of Gigabit-class LTE connectivity and longer battery life, coupled with Windows 10 (and universal application support), opens a door for Qualcomm to walk through and take a seat at the table.

Potential Problems
As much as the connectivity story of Qualcomm and the X16 modem could be the driving force behind adoption, depending on the reaction and integration with carriers, it could also be a hindrance to consumers. In a space with limited-capacity data plans, speeds that get throttled based on usage, and the potential conflicts pending from governmental net neutrality debates, consumers could balk if the data package isn’t sold fairly and easily. An always-connected device is useless if you have to jump through hoops to get it connected or if the cost is prohibitive for any non-executive-class buyer.

Even with the leveling off of performance in the Windows notebook market, Qualcomm and Microsoft are going to need to address the question of performance on this hardware. Does the Snapdragon platform have enough horsepower to drive the operating system and its native Windows Store applications? And how does the binary translation/emulation layer affect the user experience for non-UWP programs? Qualcomm doesn’t have to win in this area, just be close enough to get the job done. The bar will be set at different levels for different target demographics (education, business, casual consumer), but overcoming the previous Windows RT performance stigma should be at the top of Qualcomm’s list.

Another big problem comes from the primary competition here: Intel. Intel has had the notebook space to itself for a very long time, but that does not mean it has become complacent. Even though Qualcomm proved it had the better design and architecture for mobile devices, it is moving into Intel’s home turf now. Though the 10nm advantage Qualcomm has with Snapdragon means something, the tight integration of Intel and its in-house manufacturing gives it an ability to adapt that few companies in the world can match. Taking physical and architectural capability out of the discussion, Intel also has considerable financial and channel advantages over Qualcomm. Marketing funds and inventory balancing are subtle tools utilized throughout the PC market, and Qualcomm hasn’t proven it has the ability or desire to engage in the same way.

Expectations
Ironically, many of my expectations for the Qualcomm and Windows partnership in 2017 will depend on outside forces. With the recent announcement of Windows 10 S, a streamlined version of Microsoft’s OS that targets not only low-cost Chromebook competitors but even ~$1,000 consumer notebooks, Qualcomm may feel the need to address both the $600+ range of machines with Snapdragon and the sub-$400 market to take advantage of Microsoft’s re-engagement with the education community. Attempting to cover too many areas at initial launch could be a huge drain on resources, and I would expect Qualcomm to instead sell a product with premium, and unique, feature sets.

Which leads me to another prediction, this one around carriers. Though nothing has been announced yet, increased competition between all the US and international cellular providers, and the marketing desire to push the value of a Gigabit-class network, should result in multiple players adopting aggressive pricing and bandwidth strategies for notebook users. If this happens, and Qualcomm and Microsoft can make purchasing and activating LTE-connected Windows 10 notebooks a straightforward and simple process, it could mean the beginning of an entirely new category of productivity hardware.

AMD Ryzen 5 Launch Signals Competition in Mainstream Market

This week, AMD released the latest in its family of Zen processors, the Ryzen 5 series. Targeted at DIY consumers and OEMs with retail prices ranging from $169 to $249, Ryzen 5 can address a much wider segment of the market than the Ryzen 7 processors launched last month, which are priced as high as $499. The competing Intel processors in the Core i5 family sit in essentially the same price segment of the market, but AMD Ryzen has a significant advantage in thread count, with every released part supporting simultaneous multi-threading (SMT). Though Zen is at a deficit in per-clock performance compared to Intel’s Kaby Lake, a 2-3x advantage in threading capability offers substantial headroom for application performance.

Platform Value
Intel has had years of consumer mind share and channel market share in this segment without competition, and AMD understands it needs to do more than just equalize metrics to make any significant market share moves. On top of the thread and core count advantages Ryzen 5 offers over Core i5, motherboards based on the AMD B350 chipset offer value-adds of their own. The B350 chipset includes the ability to overclock both the CPU clock speed and memory on the Ryzen platform, all while adding support for interface technologies like M.2 NVMe SSDs and USB 3.1 connectivity. Intel’s competitive solution for low-cost motherboards is the B250 chipset, but it locks consumers out of overclocking of any kind.

It’s good that AMD decided to allow overclocking on the B350 chipset. Testing has proven that increased DDR4 memory speeds can have a dramatic impact on the performance of some applications, especially games. Given the controversy surrounding the Ryzen 7 processors and gaming performance, any avenue AMD can offer to improve this area is welcome.

Consumer Performance
Direct performance comparisons of Ryzen to Core start with the Ryzen 5 1600X and the Core i5-7600K. The 6 cores and 12 threads of the 1600X give AMD performance leads over the 7600K (4 cores and 4 threads) of a kind we have not seen since the original Athlon hit the market. Applications like Blender (used for 3D rendering) and Handbrake (for media creation and transcoding) show the power multi-threaded workloads can tap into on a Ryzen CPU. Even the 4-core, 8-thread Ryzen 5 1500X (priced $60 lower than the 1600X) is able to outpace the Intel CPUs in this segment.
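
The kind of scaling that lets the 1600X pull ahead in Blender and Handbrake can be illustrated with a self-contained toy benchmark. The sketch below is not the testing referenced in this article; it simply times an embarrassingly parallel, CPU-bound task across different process-pool sizes so you can see how throughput grows with the cores and threads available on your own machine.

```python
# Toy demonstration of why thread-heavy workloads favor higher core counts.
# This is a synthetic illustration, not the Blender or Handbrake testing
# referenced above: it times a CPU-bound task across process-pool sizes.
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """CPU-bound busywork standing in for one render tile or encode chunk."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 24          # 24 independent work items
    for workers in (1, 4, 6, 12):
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(burn, chunks))
        print(f"{workers:>2} workers: {time.perf_counter() - start:.2f}s")
```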

Single-threaded performance still belongs to Intel and its Kaby Lake architecture. Synthetics and a few applications like Audacity audio encoding bear this out, and though there aren’t many benchmarks that make the case, real-world experience and user interfaces are very often single-thread limited.

One of the Achilles’ heels of AMD’s initial Ryzen 7 launch centered on PC gaming at lower resolutions like 1080p. The story remains mostly the same for Ryzen 5, where the Core i5-7600K demonstrates better performance in most of our testing. In a few cases, particularly with “Ashes of the Singularity” and “Hitman”, the Ryzen 5 1600X is able to hold its own, matching the results from Intel. AMD was able to show the potential benefits of optimizing game engines for Ryzen through the Ashes developers, netting a 31% overall improvement at peak. The difficulty for AMD will be getting a wide array of game and engine developers to do the same and spend the time and money necessary to optimize for more highly threaded processors.


Intel Reaction
Intel, for its part, has remained publicly silent about the moves AMD is making with Ryzen. Many in the industry and DIY community have accused Intel of sitting on the market, unmotivated to improve performance in the areas important to enthusiasts without competition to push it down that path. The validity of that opinion is tempered by knowing Intel has focused most of its resources on the mobile markets (both smartphone/tablet and notebooks). Both process technology innovations and architectural shifts on Intel processors have been built to lower power consumption and improve instantaneous performance.

There is some buzz that Intel might be moving up the roadmap for forthcoming refresh processors in the desktop space to address the competition. I do not expect Intel to adjust current pricing of Core i5 or Core i7 processors in response to AMD, but I do see Intel making specification and price adjustments with the next-generation processors to accommodate the evolution Ryzen has brought to the market. Expect more cores, more threads, and lower prices from Intel.

AMD has been able to deliver on its promise of a competitive consumer processor with both Ryzen 7 and Ryzen 5. Though gaming performance remains a potential pitfall for now, Ryzen 5 stands out from Core i5 in any multi-threaded workload, and does so in dominating fashion. As the consumer software space continues to adapt to multitasking and highly threaded application workloads (AI, computer vision), AMD will continue to have the advantage.