China Tariffs Will Impact the PC Components Market

One of the better parts of working in the technology industry is that it transcends politics. Most of the time. With the advent of the tariffs that the Trump administration has progressively put in place on goods produced in China, that has shifted. What started with an emphasis on materials like steel has now muddied the water for the PC ecosystem, and in particular for the segments that depend on individual components: DIY builders and enthusiast gamers.

The impact of tariffs on the final pricing of products for consumers in the PC space has been discussed for months at this point, with several component vendors proactively reaching out to the media. The hope is that by talking openly about the situation and how it might affect the market, we can prepare readers for the future they are going to be a part of, limiting surprise and anger. Or at least redirecting it toward the politicians and policy makers rather than channel partners and manufacturers.

Coverage of the tariffs has been steady, but I would argue not aggressive enough. Outlets like GamersNexus have done a great job of laying out the details while also presenting the thoughts of hardware vendors telling their side of the story. Mainstream outlets like CNBC have also touched on the subject, with a lean towards the financial implications for major players like NVIDIA, AMD, and Intel.

The biggest area of concern when it comes to the tariff implementation is graphics cards. These are easily the most expensive products (on average) produced in China that will see price increases once the tariffs take effect. NVIDIA and AMD will both be impacted to some degree, but how they plan to work with the board partners that produce, ship, and sell to the end user will be the most interesting part. It is well understood that NVIDIA and AMD make the majority of the margin in the graphics card and GPU pipeline, with companies like EVGA and ASUS making less than 10% (and supposedly MUCH less in many cases). The tariffs are technically imposed on the company that brings the goods into the US for sale, meaning that the board vendors are on the hook.

But graphics cards aren’t going to be the only casualty here. Nearly every class of component will have tariffs applied, from power supplies to computer chassis to motherboards. Even coolers and storage devices are on the list. This translates into higher import costs for companies like Corsair, NZXT, Gigabyte, MSI, and many others, throughout their product stacks. Expect higher prices to be passed on to the consumer because of these trade policy changes.

The impact might extend even beyond the components and companies with direct ties to the tariffed products. We saw a similar situation during the mining craze of 2018, when secondary components saw reduced demand and sales because higher prices on graphics cards were driving away upgraders and new system builders. I foresee the same thing occurring here – if the costs of graphics cards, motherboards, cases, and power supplies all go up by 10-25% in the next quarter, sales of complementary products like CPUs, memory, keyboards, and accessories could drop.

Interestingly, the tariffs in place with this third iteration of Trump’s policy do not affect complete, pre-built systems. This means that computers that ship from China with a case, GPU, power supply, etc. included and assembled are not required to pay the 25% import fees. Though not practical for large-scale operations, it could be an avenue for smaller-volume transactions to save money.

For the vendors and manufacturers that now must balance the bottom line of their financials against the goodwill and consumer impact of big price hikes, there are only a few options to consider. The first is a direct cost pass-through to the consumer. Rather than paying $400 for that new Radeon or GeForce graphics card, you should expect to see $500. Future product releases will likely include the tariffs in stated MSRPs, something Europeans are familiar with because of VAT.
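
To make that arithmetic concrete, here is a minimal sketch in Python built on simplified, hypothetical assumptions: the tariff is levied on the declared import value of the card and the importer passes the entire added cost through to the shelf price. The $400-to-$500 example above implicitly treats the full retail price as the dutiable value; in practice the dutiable import value is lower, which softens the hit.

```python
# A minimal sketch with hypothetical assumptions about how a tariff on the
# imported value of a graphics card could translate into a higher shelf price.

def retail_with_tariff(retail_price, dutiable_share, tariff_rate, pass_through=1.0):
    """Estimate the new shelf price after a tariff on the imported value.

    retail_price   -- current shelf price in dollars
    dutiable_share -- fraction of the retail price treated as taxed import value
    tariff_rate    -- tariff applied to that value (e.g. 0.25 for 25%)
    pass_through   -- fraction of the added cost passed on to the buyer
    """
    added_cost = retail_price * dutiable_share * tariff_rate * pass_through
    return retail_price + added_cost

print(retail_with_tariff(400, 1.00, 0.25))  # 500.0 -- the worst case quoted above
print(retail_with_tariff(400, 0.70, 0.25))  # 470.0 -- if only 70% of the price is import value
```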

The short-term solution in the ramp up to the initial 10% tariff and the pending 25% tariff on these products was to import as much product from China as possible prior to the tax taking effect. Companies that had the capability were (and still are) ramping up production and shipping to hoard as many power supplies, cases, and motherboards as possible in stateside warehouses. But being the field that it is, technology changes quickly, with new chipsets, graphics processors, and designs releasing frequently. It’s nearly impossible to speed up development of something like a new line of graphics chips to avoid additional taxation.

A long-term option will be for these companies to move production facilities to countries other than China. Ironically, that is the goal of such a tariff: to encourage these vendors to manufacture more in the United States. But no company I spoke with, or that I have seen quoted anywhere else, has indicated that would be the best option. Instead, I am hearing that board production will shift toward other Asian countries like Singapore and Vietnam, with even some increase in Taiwan. But this kind of move takes time: months at least, perhaps years.

For now, vendors appear to be pessimistic on the outlook for a tariff resolution. Enthusiasts and DIY consumers are equally concerned about how this will affect them. For better or worse, this problem is much bigger than graphics cards and motherboards, and our market is simply caught up in the political storm of our time.

Verizon Creates Confusion with 5G Internet Service

This week Verizon became the first company to deploy a 5G service for consumers. Rolling out in Houston, Indianapolis, Los Angeles, and Sacramento, it is called Verizon 5G Home and promises to bring speeds “up to 1 Gbps” for internet access over cellular wireless technology. Service should “run reliably” at around 300 Mbps, peaking near 940 Mbps during times of low utilization and depending on your home’s proximity to the first 5G-enabled towers.

The problem is that Verizon 5G Home is not really a 5G technology. Instead, Verizon admits that this configuration is a custom version of the next-generation network that was built to test its rollout of 5G in the future.

It is called “5G TF,” and it includes customizations and differences from the 3GPP standard known as 5G NR (New Radio). As with most wireless (or technology in general) standards, 5G NR is the result of years of debate, discussion, and compromise between technology companies and service providers. But it is that standardization that allows consumers to be confident in device interoperability and the long-term success of the initiative.

5G TF does operate in the millimeter wave part of the spectrum, 28 GHz to be exact. But 5G isn’t limited to mmWave implementations. And the Verizon implementation only includes the capability for 2×2 MIMO, less than the 4×4 support in 5G NR that will bring bandwidth and capacity increases to a massive number of devices on true 5G networks.
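
As a rough, back-of-the-envelope illustration of why the stream count matters (the bandwidth and spectral-efficiency figures below are hypothetical, not Verizon’s or 3GPP’s actual link budgets), peak throughput in an idealized link scales with channel bandwidth, modulation efficiency, and the number of spatial streams, so 4×4 MIMO roughly doubles what an otherwise identical 2×2 link can deliver.

```python
# Simplified, illustrative model only: peak rate ~ bandwidth x spectral
# efficiency x spatial streams. Real 5G link budgets are far more involved.

def peak_throughput_gbps(bandwidth_mhz, bits_per_hz, spatial_streams):
    """Idealized peak throughput in Gbit/s."""
    return bandwidth_mhz * 1e6 * bits_per_hz * spatial_streams / 1e9

# Hypothetical mmWave carrier: 100 MHz wide, ~4.8 bit/s/Hz per stream
# (roughly 64-QAM with coding overhead).
print(peak_throughput_gbps(100, 4.8, 2))  # ~0.96 Gbit/s with 2x2 MIMO
print(peak_throughput_gbps(100, 4.8, 4))  # ~1.92 Gbit/s with 4x4 MIMO
```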

Upcoming 5G-enabled phones and laptops that integrate a 5G NR modem will not operate with the concoction Verizon has put together.

Verizon even admitted that all of the 5G TF hardware that the company is rolling out for infrastructure and end user devices will need to be replaced at some point in the future. It is incompatible with the true 5G NR standard and is not software upgradable either. From an investment standpoint you can’t help but wonder how much benefit Verizon could gain from this initiative; clearly this will be a financial loss for them.

But what does Verizon gain?

The truth is that Verizon is spouting these claims of the world’s first 5G network as a way to attach itself to a leadership position in the wireless space. Marketing and advertising are eager to showcase how Verizon is besting the likes of AT&T, T-Mobile, and Sprint with a 5G cellular rollout in the US, but it’s just not accurate.

Take for example the AT&T “5G Evolution” that was actually a 4G LTE service with speeds up to 1 Gbps. It was an amazing feat and a feature worth promoting, but the carrier decided instead to message that it was part of the 5G transition.

Both of these claims do a disservice to the true capability and benefits of 5G technology while attempting to deceive us into believing each company is the leader in the wireless space. As a result, consumers end up confused and aggravated, removing yet another layer of trust between the customer and service providers. Other companies that are taking care with the 5G story, whether competing ISPs or technology providers like Qualcomm, suffer the same fate through no fault of their own.

These antics should come as little surprise to anyone that followed along with the move from 3G to 4G and to LTE. Most insiders in the industry hoped that we had collectively learned a lesson in that turmoil and that 3GPP might be able to help control these problematic messaging tactics. Instead we appear to be repeating history and it will be up to the media and an educated group of consumers to tell the correct story.

Making Sense of the GeForce RTX launch

This week marks the release of the new series of GeForce RTX graphics cards that bring the NVIDIA Turing architecture to gamers around the globe. I spent some time a few weeks back going over the technological innovations that the Turing GPU offered and how it might change the direction of gaming, and that is worth summarizing again.

At its heart, Turing and GeForce RTX include upgrades to the core functional units of the GPU. Based on a very similar structure to previous generations, Turing will improve performance in traditional and current gaming titles with core tweaks, memory adjustments, and more. Expect something on the order of 1.5x or so. We’ll have more details on that later in September.

The biggest news is the inclusion of dedicated processing units for ray tracing and artificial intelligence. Much like the Volta GPUs that are being utilized in the data center for deep learning applications, Turing includes Tensor Cores that accelerate matrix math functions necessary for deep learning models. New RT cores, a first for NVIDIA in any market, are responsible for improving performance of traversing ray structures to allow real-time ray tracing an order of magnitude faster than current cards.
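
For readers who want a feel for what those Tensor Cores actually compute, here is a small NumPy illustration of the mixed-precision pattern they accelerate: multiplying low-precision (FP16) matrices while accumulating at higher precision (FP32). This is only a numerical sketch on the CPU, not something that runs on Tensor Cores.

```python
import numpy as np

# Illustrative only: FP16 inputs, FP32 accumulation -- the multiply-accumulate
# pattern Tensor Cores implement in hardware. NumPy runs this on the CPU.

a = np.random.rand(256, 256).astype(np.float16)  # low-precision inputs
b = np.random.rand(256, 256).astype(np.float16)

# Casting up before the multiply emulates the higher-precision accumulation,
# which keeps rounding error from compounding across each dot product.
c = np.matmul(a.astype(np.float32), b.astype(np.float32))

print(c.dtype, c.shape)  # float32 (256, 256)
```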

Reviews of the new GeForce RTX 2080 and GeForce RTX 2080 Ti hit yesterday, and the excitement around them is a bit more tepid than we might have expected after a two-year hiatus from flagship gaming card launches. I’d encourage you to check out the write-ups from PC Perspective, Gamers Nexus, and Digital Foundry.

The RTX 2080 is indeed in a tough spot, with performance matching that of a GTX 1080 Ti but with a higher price tag. NVIDIA leaned heavily on the benefit of Turing over Pascal in regard to HDR performance in games (using those data points in its own external comparison graphics), but the number of consumers that have or will adopt HDR displays in the next 12 months is low.

The RTX 2080 Ti is clearly the new leader in graphics and gaming performance, but it comes with a rising price tag as well. At $1199 for the NVIDIA-built Founders Edition of the card (third-party vendors will still be selling their own designs), the RTX 2080 Ti now sells for the same amount as the Titan Xp and $400 more than the GTX 1080 Ti launched at. The cost of high-end gaming is going up, that much is clear.

I do believe that the promise of RTX features like ray tracing and DLSS (deep learning super sampling) will drive a shift in the gaming market. Developers and creative designers have been asking for ray tracing for decades, and I have little doubt that they are eager to implement it. And AI is taking over anything and everything in the technology field; gaming will be no different. DLSS is just the first instance of AI integration for games. It is likely we will find uses for AI in games for rendering, animation, non-player character interactions, and more.

But whether or not that “future of gaming” pans out in the next 12-18 months is a question I can’t really answer. NVIDIA is saying, and hoping, that it will, as it gives the GPU giant a huge uplift in performance on RTX-series cards and a competitive advantage over anything in the Radeon line from AMD. But even with a substantial “upcoming support” games list that includes popular titles like Battlefield V, Shadow of the Tomb Raider, and Final Fantasy XV, those of us on the outside looking in can’t be sure and are being asked to bet with our wallets. NVIDIA will need to do more, and push its partners to do more, to prove to us that the RTX 20-series will see a benefit from this new technology sooner rather than later.

When will AMD and Radeon step up to apply pressure and bring balance back to the market? Early 2019 may be our best bet, but the roadmaps from the graphics division there have been sparse since the departure of Raja Koduri. We know AMD is planning to release a 7nm Vega derivative for the AI and enterprise compute markets later this year, but nothing has been solidified for the gaming segment just yet.

In truth, this launch is a result of years of investment in new graphics technologies from NVIDIA. Not just in new features and capabilities but in leadership performance. The GeForce line has been dominating the high-end of the gaming market for at least a couple generations and the price changes you see here are possible due to that competitive landscape. NVIDIA CAN charge more because its cards are BETTER. How much better and how much that’s worth is a debate the community will have for a long time. Much as the consumer market feigns concern over rising ASPs on smartphones like the Apple iPhone and Samsung Galaxy yet still continues to buy in record numbers, NVIDIA is betting that the same is true for flagship-level PC gaming enthusiasts.

Why Cheating on Smartphone Benchmarks Matters to You

Earlier this month a story posted on the popular tech review site AnandTech revealed some interesting data about the performance of flagship Huawei smartphones. As it turns out, benchmark scores in some popular graphics tests, including UL Benchmark’s 3DMark and the long-time mobile graphics test GFXBench, were being artificially inflated to gain an advantage over competing phones and application processors.

These weren’t small changes. Performance in a particular subset of the GFXBench test (T-Rex offscreen) jumped from 66.54 FPS to 127.36 FPS, an improvement of nearly 2x. The lower score is what the testing showed when “benchmark detection mode” was turned off – in other words, when the operating system and device were under the assumption that this was a normal game. The higher score is generated when the operating system (customized by Huawei) detects a popular benchmark application and pushes the chip’s power consumption beyond levels that would actually be sustainable in a phone. This is done so that reviews that utilize these common tests paint the Huawei devices in a more favorable light.
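
Conceptually, the reported behavior amounts to an application whitelist in the power governor. The sketch below is purely illustrative (it is not Huawei’s code, and the package names and wattages are hypothetical placeholders), but it shows how little logic is needed to give a known benchmark a thermal budget no ordinary game would ever receive.

```python
# Conceptual illustration only, NOT Huawei's actual implementation.
# Package names and power limits are hypothetical placeholders.

KNOWN_BENCHMARKS = {"com.example.gfxbench", "com.example.3dmark"}

def soc_power_limit_watts(foreground_package, normal_limit=4.0, boosted_limit=8.0):
    """Return the power budget a detection-based governor might allow."""
    if foreground_package in KNOWN_BENCHMARKS:
        return boosted_limit  # benchmark detected: unsustainable burst mode
    return normal_limit       # everything else gets the shipping thermal budget

print(soc_power_limit_watts("com.example.gfxbench"))  # 8.0
print(soc_power_limit_watts("com.example.realgame"))  # 4.0
```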

The team behind the Geekbench benchmark found similar results, and I posted about them on ShroutResearch.com recently. Those results showed multi-core performance deltas as high as 31% in favor of the “cheating” mode.

While higher scores are better, of course, there are significant problems with the actions Huawei undertook to mislead the editorial audience and consumers.

First, and maybe most importantly for Huawei going forward, this testing and revelation paints the newly announced Kirin 980 SoC (developed in-house by HiSilicon) in a totally different light. While the launch press conference appeared to show a new mobile chipset that could run screaming past Qualcomm’s Snapdragon 845 platform, we now view the benchmarks Huawei presented as dubious at best. Will the Kirin 980 actually live up to the claims that the company put forward?

The most obvious group affected by Huawei’s decision to misrepresent currently shipping devices is the consumer. Buyers of flagship devices often depend on reviews, and the benchmarks that lead up to an author’s recommendation, to aid in the buying process. And customers particularly interested in the gaming and high-end application performance of their smartphones pay even closer attention to benchmark results, some of which were falsely presented.

Other players in the smartphone market that are not taking part in the act of cheating on benchmarks also suffer due to Huawei’s actions, which is obviously the point. Competing handset vendors like Samsung, Oppo, Vivo, and perhaps even Apple are handicapped by the performance claims Huawei has made, which show the competing devices in an artificially negative light. In the Chinese market, where benchmarks and performance marketing are even more important than in the US, Huawei’s attempt to stem the tide of competition has the most effect.

To a lesser degree, this hurts Qualcomm and Samsung’s Exynos products too, making their application processor solutions look like they are falling behind when in fact they may actually be the leaders. Most of the high-end smartphones in China and the rest of the world are built around the Snapdragon line, and pressure on Qualcomm from its own customers was growing after they saw Huawei supposedly take performance leadership.

This impacts the developers of tools like 3DMark, Geekbench, and GFXBench as well. To some on the outside, it will invalidate the work and taint the credibility of other, non-cheating results in these tests. Consumers will start to fear that other scores are artificially inflated and not representative of the performance they should expect from their devices. Other silicon and device vendors might withdraw support for the tools, reducing the development resources these companies have to improve and innovate on benchmark methodology.

Huawei’s answer, that “it’s just some AI” purposefully shifting the benchmark scores, has the potential to damage the entire AI movement. If consumers begin to see AI-enabled devices and software as misrepresenting their work, or to assume that everything that integrates AI is actually a scam, we could roll back the significant momentum the market has built and risk cutting it off completely.

Measuring performance on smartphones is already a complicated and tenuous task. The benchmarks we have today are imperfect and arguably need to undergo some changes to more accurately represent the real-world experiences that consumers get with different devices and more capable processors. But acts like cheating make it harder for the community at large to work together and address the performance questions as a whole.

Do we need better mobile device testing for performance and features and cameras and experiences? Yes. But cheating isn’t the way to change things and, when caught, can do significant damage to a company’s reputation.

Despite rumors, 7nm is Not Slowing Down for Qualcomm

Earlier this week, a story ran on Digitimes indicating there might be some problems and slowdown with the rollout of 7nm chip technologies for Qualcomm and MediaTek. Digitimes is a Taiwan-based media outlet that has been tracking the chip supply chain for decades but is known to have a rocky reliability record when it comes to some of its stories and sources.

The author asserts that “Instead of developing the industry’s first 7nm SoC chip” that both of the fabless semiconductor companies mentioned have “moved to enhance their upper mid-range offerings by rolling out respective new 14/12nm solutions.”

But Qualcomm has already built its first 7nm SoC and we are likely to see it this year at its annual Snapdragon Tech Summit being held in Maui, Hawaii this December. The company has already sent out “save the date” invites to media and analysts and last year’s event was where it launched the Snapdragon 845, so it makes sense it would continue that cadence.

If that isn’t enough to satisfy doubters, Qualcomm went as far as to publish a press release that the “upcoming flagship” mobile processor would be built on 7nm and that it had begun sampling this chip to multiple OEMs building the next generation of mobile devices. The press release quotes QTI President Cristiano Amon as saying “smartphones using our next-generation mobile platform [will launch] in the first half of 2019.”

Digitimes’ claim that both Qualcomm and MediaTek have “postponed” launches from 2018 to 2019 runs counter to all the information we have received over the previous six months. As far as we can tell, the development of the next Snapdragon product and TSMC’s 7nm node is on track and operating as expected.

12nm/14nm refinements are coming

The assertion that Qualcomm is enhancing upper- and mid-range platforms around the existing 14nm and 12nm process nodes is likely true. It is common for the leading-edge foundry technologies to be limited to the high performance and/or high efficiency products that both require the added capability and can provide higher margins to absorb the added cost of the newer, more expensive foundry lines.

There could be truth to the idea of chip companies like Qualcomm putting more weight behind these upper-mid grade SoCs due to the alignment with the 5G roll out across various regions of the globe. But this doesn’t indicate that development has slowed in any way for the flagship platforms.

7nm important for pushing boundaries

Despite these questions and stories, the reality is that the 7nm process is indeed necessary for the advancement of the technology that will push consumer and commercial products to new highs as we move into the next decade. Building the upcoming Snapdragon platform on 7nm means Qualcomm can provide a smaller, denser die to its customers while also targeting higher clock speeds and additional compute units. This means more cores, new AI processing engines, better graphics, and integrated wireless connectivity faster than nearly any wired connection.

This does not benefit only Qualcomm though; there is a reason Apple’s upcoming A12 processor is using 7nm for performance and cost efficiency advantages. AMD is driving full speed into 7nm to help give it the edge over Intel in the desktop, notebook, and enterprise CPU space for the first time in more than a decade. AMD will even have a 7nm enterprise graphics chip sampling this year!

Those that don’t clearly see the advantage 7nm will give to TSMC’s customers haven’t been watching the struggles Intel is having with its product roadmap. Without an on-schedule 10nm node, it is being forced to readjust launches and product portfolios to a degree I have never seen. The world’s largest silicon provider will survive the hurdle, but to assume that its competitors aren’t driving home their advantage with early integration of 7nm designs would be naive.

Lenovo Yoga C630 WOS Laptop First with Snapdragon 850

More than a full year into the life of the Windows on Snapdragon product, the jury is still out on how well received the first generation of notebooks was. Qualcomm launched machines with three critical OEM partners: HP, ASUS, and Lenovo. All three systems offered a different spin on a Windows laptop powered by a mobile-first processor. HP had a sleek and sexy detachable, the ASUS design was a convertible with the most “standard” notebook capabilities, and the Lenovo Miix design was a detachable with function over form.

Reviews indicated that while the community loved the extremely long battery life that the Snapdragon platform provided, the performance and compatibility concerns were significant enough to sway purchasing decisions. Prices were also a bit steep, at least when compared on a raw performance basis against Intel-based solutions.

Maybe the best, but least understood, advantage of the Snapdragon-based Windows notebooks was the always connected capability provided by the integrated Gigabit LTE modem. It took only a few trips away from the office for me to grasp the convenience and power of not having to worry about connectivity or hunting for a location with open Wi-Fi service in order to send some emails or submit a news story. Using your notebook like your smartphone might not be immediately intuitive, but now that I have tasted that reality, I need it back.

As a part of a long-term strategy to take market share in the Windows notebook market, Qualcomm announced the Snapdragon 850 processor in June during Computex in Taipei. A slightly faster version of the Snapdragon 845 utilized in today’s top-level Android smartphones, the SD 850 is supposed to be 30% faster than the SD 835 (powering the first generation of Always On, Always Connected PCs) while delivering 20% better battery life and 20% higher peak LTE speeds.

Those are significant claims for a single generational jump. The 20% added battery life alone is enough to raise eyebrows, as the current crop of Snapdragon devices already provides the best battery life we have ever tested on a Windows notebook. The potential for 30% better performance is critical as well, considering the complaints about system performance and user experience that the first generation received. We don’t yet know where that 30% will manifest: in single-threaded capability or only in multi-threaded workloads. It will be important to determine that as the first devices make their way to market.

Which leads us to today’s announcement about the Lenovo Yoga C630 WOS, the first notebook to ship with the Snapdragon 850 processor. The design of the machine is superb and comes in at just 2.6 pounds. It will come with either 4GB or 8GB of LPDDR4X memory and 128GB or 256GB UFS 2.1 storage, depending on your configuration. The display is 13.3 inches with a resolution of 1920×1080 and will have excellent color and viewing angles with IPS technology. It has two USB Type-C ports (supporting power, USB 3.0, and DisplayPort) along with an audio jack and fingerprint sensor.

When Lenovo claims the Yoga C630 WOS will have all-day battery life, they mean it. Lenovo rates it at 25 hours which is well beyond anything similarly sized notebooks with Intel processors have been able to claim. Obviously, we will wait for a test unit before handing out the trophies, but nothing I have read or heard leads me to believe this machine won’t be the leader in the clubhouse when it comes to battery life this fall.

Maybe more important for Qualcomm (and Arm) with this release is how Lenovo is positioning the device. No longer relegated to a lower-tier brand in the notebook family, the Snapdragon 850 iteration is part of the flagship consumer Yoga brand. The design is sleek and in line with the high-end offerings built around Intel processors. All signs indicate that Lenovo is taking the platform more seriously for this launch, and that mentality should continue with future generations of Snapdragon processors.

I don’t want to make too much of this announcement and product launch without information from other OEMs and their plans for new Snapdragon-based systems, but the initial outlook is that momentum is continuing to build in favor of the Windows-on-Arm initiative. The start was rocky, but in reality, we expected that to be the case after getting hands-on with the earliest units last year. Qualcomm was at risk that partners would back away from the projects because of it, or that Intel might put pressure (marketing or product-based) on them to revert.

For now, that doesn’t appear to be the case. I am eager to see how the Lenovo Yoga C630 WOS can close the gap for Windows-on-Snapdragon and continue this transformative move to a more mobile, more connected computing ecosystem.

NVIDIA Turing brings higher performance, pricing

During Gamescom, the international games industry show in Cologne, Germany, this week, NVIDIA CEO Jensen Huang took the covers off the company’s newest GPU architecture aimed at enthusiast PC gamers. Codenamed Turing and carrying the GeForce RTX brand, the shift represents quite a bit more than an upgrade in performance or better power efficiency. This generation, NVIDIA is attempting to change the story with fundamentally new rendering techniques, capabilities, and, yes, prices.

At its heart, Turing and GeForce RTX include upgrades to the core functional units of the GPU. Based on a very similar structure to previous generations, Turing will improve performance in traditional and current gaming titles with core tweaks, memory adjustments, and more. Expect something on the order of 1.5x or so. We’ll have more details on that later in September.

The biggest news is the inclusion of dedicated processing units for ray tracing and artificial intelligence. Much like the Volta GPUs that are being utilized in the data center for deep learning applications, Turing includes Tensor Cores that accelerate the matrix math functions necessary for deep learning models. New RT cores, a first for NVIDIA in any market, are responsible for improving the performance of traversing ray structures to allow real-time ray tracing an order of magnitude faster than current cards.

Both of these new features will require developer integration to really take advantage of them, but NVIDIA has momentum building with key games and applications already on the docket. Both Battlefield V and Shadow of the Tomb Raider were demoed during Jensen’s Gamescom keynote. Ray tracing augments standard rasterization rendering in both games to create amazing new levels of detail in reflections, shadows, and lighting.

AI integration, for now, is limited to a new feature called DLSS that uses AI inference locally on the GeForce RTX Tensor Cores to improve the image quality of the game in real time. This capability is trained by NVIDIA (on its deep learning supercomputers) using the best quality reference images from the game itself, a service provided by NVIDIA to its game partners that directly benefits the gamer.

There are significant opportunities for AI integration in gaming that could be addressed by NVIDIA or other technology companies. Obvious examples would include computer-controlled character action and decision making, material creation, and even animation generation. We are in the nascent stages of how AI will improve nearly every aspect of computing, and gaming is no different.

Pricing for the new GeForce RTX cards definitely raised some eyebrows in the community. NVIDIA is launching this new family at a higher starting price point than the GTX 10-series launched just over two years ago. The flagship model (RTX 2080 Ti) will start at $999 while the lowest priced model announced this week (RTX 2070) comes in at $499. This represents an increase of $400 at the high-end of the space and $200 at the bottom.

From its view, NVIDIA believes the combination of performance and new features that RTX offers gamers in the space is worth the price being asked. As the leader in the PC gaming and graphics space, the company has a pedigree that is unmatched by primary competitor AMD, and thus far, NVIDIA’s pricing strategy has worked for them.

In the end, the market will determine if NVIDIA is correct. Though there are always initial complaints from consumers when the latest iteration of their favorite technology is released with a higher price tag than last year’s model, the truth will be seen in the sales. Are the cards selling out? Is there inventory holding on physical and virtual shelves? It will take some months for this to settle out as the initial wave of buyers and excitement comes down from its peak.

NVIDIA is taking a page from Apple in this play. Apple has bucked the trend that says every new chip or device released needs to be cheaper than the model that preceded it, instead raising prices with last year’s iPhone X and finding that ASP (average selling price) jumped by $124 in its most recent quarter. NVIDIA sees its products in the same light: providing the best features with the best performance, and thus worthy of the elevated price.

The new GeForce RTX family of graphics cards is going to be a big moment for the world of PC gaming and likely other segments of the market. If NVIDIA is successful with its feature integration, partnerships, and consumer acceptance, it sets the stage for others to come into the market with a similar mindset on pricing. The technology itself is impressive in person and proves the company’s leadership in graphics technology, despite the extreme attention that it gets for AI and data center products. Adoption, sales, and excitement in the coming weeks will start to tell us if NVIDIA is able to pull it all off.

New Threadripper Puts AMD in Driver Seat for Workstations

AMD kicked off a race in CPU core count when it released the first-generation Ryzen processor back in 2017, pushing out a product with 8 cores and 16 threads, double that of the equivalent platform from Intel. It followed that same year with Ryzen Threadripper, an aggressive name for an aggressive product aimed at the high-end enthusiast market and the growing prosumer space of users looking to do both work and play on personal machines. Threadripper went up to 16 cores and 32 threads, well above the 10-core designs that Intel offered in the same market space.

AMD was able to do this quickly and cost effectively by double dipping on the development cost of the EPYC server processor. It shared the same socket and processor package design with only a handful of modest modifications to make it usable by end-users and partners. It was putting the pressure on Intel once again, this time in a market that Intel was previously the dominant leader in AND that it had created to begin with. Thus continued the “year of AMD.”

Intel did respond, offering a revision to the Core X-series of processors that reached up to 18 cores and 36 threads, one-upping the AMD hardware in core count and performance. But it did so at a much higher cost; it seemed that Intel was not willing to undercut its own Xeon workstation line in order to return the pressure on AMD. Still, the battle had been joined: the war of processor performance and core count had begun.

This month, just a year after the release of the first Threadripper processor, AMD is launching the 2nd generation Threadripper. It utilizes the updated 12nm “Zen+” core design with better clock scaling, improved thermal and boost technologies, and lower memory latencies. This is the same core found in the Ryzen 2000-series of processors, but with two or four dies at work rather than one.

But this time, AMD has divided Threadripper into two sub-categories, the X-series and the WX-series. The X-series peaks with the 2950X and targets the same users and workloads as the first-generation platform including enthusiasts, pro-sumer grade content creators, and even gamers. The core counts reach 16, again the same as the previous generation, but the addition of the “Zen+” design makes this noticeably faster in nearly every facet, with a lower starting price point.

The WX line is the more distinctive of the two. It is going directly after workstation users, as the “W” would imply, with as many as 32 cores and 64 threads on a single processor. Applications that can really utilize that much parallel horsepower are limited to extremely high-end content creation tools, CAD design, CPU-based rendering, and heavy multi-tasking. The WX-series is basically an EPYC processor with half the memory channels, dropped into consumer-class motherboards.

Performance on the 2990WX flagship part is getting a lot of attention; mostly positive but with some questions. It obviously cuts through any multi-threaded application that properly distributes its workload, but it also does well in single-threaded tasks thanks to AMD’s Precision Boost 2 capability. There are some instances where applications, even those that had traditionally been known as multi-threaded tests, demonstrate performance hits.

In software where threads may bounce around from core to core, and from NUMA node to NUMA node, results are sometimes lower on the 2990WX than the 2950X even though the WX model has twice the available processing cores. Gaming is one such example – it isn’t heavy enough on the processor to saturate the cores, so threads move between the four dies and two memory controllers, occasionally causing a performance hit. AMD has a software-enabled “game mode” for the 2990WX (and the 2950X) that disables one-half or three-quarters of the cores on the part, which alleviates the performance penalty but adds an extra step of hassle to the process.
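
For those curious what avoiding that migration looks like in practice, here is a minimal Linux-only sketch using Python’s scheduler-affinity calls. The core-to-die numbering is an assumption that varies by system, and this is a workaround illustration rather than AMD’s game mode itself, which disables dies at a lower level.

```python
import os

# Illustrative workaround: pin a latency-sensitive process to the logical CPUs
# of a single die so its threads stop migrating across NUMA nodes. Assumes
# logical CPUs 0..15 map to the first die; check `lscpu` on real hardware.

def pin_to_first_die(pid=0, logical_cpus_per_die=16):
    first_die = set(range(logical_cpus_per_die))
    os.sched_setaffinity(pid, first_die)  # pid 0 means the calling process

if __name__ == "__main__":
    pin_to_first_die()
    print("Now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```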

Despite the imperfection, the second-generation Threadripper processor has put Intel in a real bind.

If Intel executives were angry last year when the first Threadripper parts were released, taking away the performance crown from Intel even if only for a modest amount of time, they are going to be exceptionally mad this time around. AMD now offers content creators and OEMs a 32-core processor on a platform where Intel only provides an 18-core solution, and in applications where that horsepower is utilized, AMD has a 60%+ performance advantage.

Intel is probably planning a release of its Xeon Scalable-class parts for this same market, with a peak 28-core solution to address Threadripper, but that means another expensive branding exercise, new motherboards, a new socket, and more hassle. Intel demonstrated a 28-core processor on stage at Computex but received tremendous blowback for running it in an overclocked state and apparently omitting that information during the showcase.

While there might be a legitimate argument to be made about the usefulness of this many processor cores for a wide range of consumers, there is no doubt that AMD is pushing the market and the technology landscape forward with both this and the previous generation Threadripper launches. Intel is being forced to respond, sometimes quickly and without a lot of tact, but in the end, it means more options and more performance at a lower price than was previously available in the high-end computing space.

It’s good to have competition back once again.

Tesla Should Reconsider Building Silicon for Autonomous Driving

No stranger to making evocative statements that generate headlines, Tesla founder and CEO Elon Musk said during the company’s quarterly earnings call that the future of its autonomous driving systems for Tesla vehicles would utilize in-house designed computing systems. This is not the first time Elon has said the company was working on chips for AI processing, but it does mark the first time more specific statements on capability have been made.

But maybe the most important question is one that went unasked from the financial analysts on the call: is this even something Tesla SHOULD be pursuing?

Let’s be very clear up front: building a processor is hard. Building one that can compete with the giants of the tech market like NVIDIA, Intel, AMD, and Qualcomm is even more difficult.

A trend of custom silicon

There has been a trend in the implementer space (companies that traditionally take computing solutions from vendors and implement them into their products) to create custom silicon. Apple is by far the most successful company to do this in recent history. It moved from buying all of the parts that make up the iPhone and iPad to designing almost every piece of computing silicon, including the primary Arm processor, the graphics, and even the storage controller. (Interestingly, the modem is still the one thing that eludes it; Apple depends on Qualcomm and Intel for that.)

The other modern examples of this silicon transition are Google and Facebook. Google built the TPU for artificial intelligence processing and Facebook has publicly stated that it has research on-going for a similar project. Both of those companies are enormous with a substantial purse to back up the engineering feat that is creating a new product like this. Their financial future is not in doubt or dependent on the outcome of the AI chip process.

Tesla thrived on tech

Tesla is a company that was born and lives off of the idea that it is more advanced than everyone else out there. I should know – I bought a Model S in 2015 with that exact mindset. Musk was brash and bucked traditional automotive trends. He made promises like coast-to-coast autonomous drives by the end of 2017, and AutoPilot seemed like magic when it was released.

We are now more than halfway through 2018 without that autonomous drive taking place, and AutoPilot has been surpassed by other driver assistance solutions from GM and others.

This might lead many to believe that Tesla NEEDS to develop its own AI hardware for autonomous driving in order to get back on track, no longer wanting to be beholden to the companies that have provided previous generations of smart driving technology for its cars.

Mobileye was the first partner that Tesla brought on board, but the companies split because (as was widely rumored) Mobileye wasn’t comfortable with the expanding claims Tesla was making about its imaging and processing systems. NVIDIA hardware powered the infotainment and driving systems for some period of time and more recently Intel-based systems have found their way into the infotainment portion.

Clearly Tesla has experience working with the major players in the AI and automotive spaces.

Performance claims

On the call with analysts, Musk mentioned that these new processors Tesla was working on would have “10x” the performance of other chips. Obviously, no specifics were given, but it seems reasonable that he was talking about the NVIDIA platform in use on shipping cars today, the more than three-year-old Drive PX2. And even then, only half of the PX2’s processing power was integrated into the vehicles.

Musk also brought up that the interconnect between the CPU and GPUs on current AI hardware systems was a “constraint” and a bottleneck on computational throughput.

These reasons for building custom hardware are mostly invalid as they are addressed by current and upcoming hardware from others including NVIDIA. The Drive Xavier system offers 10x the performance of PX2 and NVIDIA’s upcoming Drive Pegasus will be even faster. And these platforms integrate NVLink technology to provide a massive upgrade to the bandwidth between the CPU and GPU, addressing the second concern Musk voiced on the earnings call.

The Risks

Deciding to design and build your own AI silicon carries a lot of risks. First, and likely most important, is the issue of safety. If Google’s TPU AI system doesn’t work correctly, we get a mismatched facial recognition result for an uploaded image. If a self-driving car system malfunctions, we have a far more dangerous situation. After the Uber autonomous driving accident that killed a pedestrian early this year, the safety and reliability of self-driving vehicles is more prominent and top-of-mind than ever before. There are years of regulation and debate coming over who shares or holds liability for these types of occurrences, but you can be damn sure that the car manufacturer is already at the top of that list.

If Tesla builds the car, designs the AI hardware, writes the AI system-level software, and sells it directly to the consumer, there are very few questions as to where the fingers will point.

Financial risks exist for building in-house silicon too. Tesla is a small company relative to Google and Facebook, and even smaller if we focus on the teams involved in software and hardware development outside of the vehicle-manufacturing systems. The cost to build custom chips is usually amortized over years and millions of units shipped, justifying the added expense compared to using off-the-shelf components. Tesla has sold just north of 350,000 cars in the last 6+ years, and even if we double that in the next six, we have only 700,000 chips that will be needed for these autonomous vehicles.
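
A quick amortization sketch makes the scale problem clearer. The development and per-chip figures below are hypothetical (Tesla has disclosed nothing), but the shape of the math is the point: a fixed engineering cost spread across hundreds of thousands of cars looks very different than the same cost spread across the tens of millions of units a merchant silicon vendor ships.

```python
# Hypothetical numbers only; the real cost of a custom AI chip program is not public.

def silicon_cost_per_unit(nre_cost, unit_cost, units_shipped):
    """Effective cost per unit: non-recurring engineering spread across the
    fleet, plus the per-chip manufacturing cost."""
    return nre_cost / units_shipped + unit_cost

# Say the program costs $250M to design and tape out, and $150 per chip to build.
print(silicon_cost_per_unit(250e6, 150, 700_000))     # ~507 dollars per car
print(silicon_cost_per_unit(250e6, 150, 20_000_000))  # 162.5 dollars at merchant scale
```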

Companies like NVIDIA that have leadership positions in the AI landscape build processors and platforms for hundreds of different usage models, from AI training to inference, and from smart cameras to autonomous cars. It has the engineering talent in place and experience to build the best chips that combine performance, efficiency, and pricing viability.

Intel and AMD will also likely make more noise in these spaces soon. Intel just finished a data center announcement that included specific deep learning and AI accelerated instructions for its updated processor family coming later this year.

Would a chip that is custom built and tuned for Tesla-specific AI models and sensor systems be more power and performance efficient than a more general solution from NVIDIA or Intel? Yes. But does that make it worth the time, money, and risk when there are so many other problems Tesla could be addressing? I just don’t see it.

Pressure on Intel and Apple as Qualcomm exits iPhone

Several months back, during one of the countless analyst calls Qualcomm hosted in the middle of the attempted hostile takeover by Broadcom, the company stated as part of a financial outlook that it was “zeroing out” incoming revenue to its licensing division from Apple, the company it was fighting in various legal arenas over royalties and IP. This was part of a multi-faceted move to appeal to investors, showing Qualcomm’s ability to remain profitable and growing without intervention from the takeover bid.

At the time most analysts believed that this “zeroing out” of Apple revenue was considered a worst-case scenario, giving the markets confidence in the prospects of Qualcomm’s licensing and chip groups even without one of its biggest customers.

On the conference call following the release of its most recent quarterly earnings, Qualcomm stated that it would not be selling modems to Apple for its next-generation iPhones. This was confirmation that the “zeroing” of Apple income those months prior was less a hypothetical scenario and more a way to prepare the market and investors for life without Apple at the purchasing table.

Qualcomm remains in a battle with Apple over royalties that the Cupertino giant owes for currently shipping devices, and that will continue for the foreseeable future. And even though Apple is not buying modem hardware from Qualcomm, it (and in reality, its suppliers) will still owe QTL royalty fees for the technologies used in future devices. There is another story to be told about the current state of the Apple-Qualcomm legal dispute, but for now I’ll leave you with the knowledge that the “beginning of the end” will start later this fall as court decisions begin to materialize.

A new paradigm for iPhone

Now that we know the upcoming iPhones will use Intel modems exclusively, moving from a carrier-by-carrier split between Intel and Qualcomm that existed for the last two generations, a new dynamic will be playing out in the smartphone market.

Flagship Android phones that utilize Qualcomm Snapdragon Mobile Platforms like the 845 will have one very specific, distinct, and important advantage that they could not claim previously. Wireless performance, both in straight line speed and at-the-edge reliability, is a feature that will become more important and prominent in 2019 and 2020.

Hardware vendors that compete with the iPhone should be leaning into this. We saw the first salvo come from Qualcomm directly by showcasing the independently gathered Ookla speed test results. In that data we found irrefutable evidence that the wireless capability of flagship Android phones using the Snapdragon 845 was dramatically more robust than iPhones using the Intel modem. It’s an interesting and expectedly aggressive first step in a marketing and messaging machine that is just at the beginning of its torque curve.

Why Gigabit and 5G matter

There are differing opinions on the value of the wireless system in place for smartphones and connected devices. Many have stated that Gigabit-class LTE speeds simply don’t matter for users or user experience. I disagree and believe that speed and latency of wireless connections will only increase in importance with advancement in streaming content, streaming apps, and even more upcoming wireless lifestyle changes.

That negative mentality also leans solely on top-speed performance, ignoring the arguably more important areas of edge-of-network performance and reliability. When the signal is at its weakest, when you are inside buildings or farther away from a cell tower, a better cellular implementation means the difference between being able to stream video or not. In an extreme situation it could mean being able to call for help in an emergency…or not.

The upcoming 5G networks will begin rolling out across the globe in 2019, and phone makers are already preparing devices to take advantage of them. Only Android-based phones that integrate Snapdragon modems will support 5G out of the gate. Intel is likely a full year behind Qualcomm in having similar capabilities or form factors to support all of the 5G bands currently planned. That means iPhones will be at least a full year behind the first wave of 5G flagship devices, and that could turn into two years if we look at Apple’s history of cellular technology implementations. (Note that Huawei is also planning a 5G modem for its SoC.)

Every other smartphone vendor from Samsung to Oppo will have an advertising campaign essentially pre-built against Apple. In a time when consumers are more technology-aware than ever before, the iPhone will be behind on cellular performance. As a result, these smartphone vendors need to hit back at Apple, and as a side effect Intel, with the goal of setting the standard higher and changing the story and value of wireless technology for the consumers that use these devices every day.

You can be sure that carriers like Verizon and AT&T are going to spend money to promote the capabilities of their 5G networks after billions in infrastructure investment. And Apple will not be able to participate in any of that collateral, leaving windows of opportunity for everyone else in the fight.

If Google is paying attention, it will follow suit, creating a campaign talking up the benefits of an open ecosystem of partners that have the flexibility to create, build, and differentiate. Innovation advantages like this don’t happen often against a company with the resources of Apple, so all parties must participate.

Pressure on Intel

With the secret confirmed courtesy of Qualcomm, Intel is now under a significant amount of pressure. It will be the sole supplier of modem and cellular technology to one of the world’s biggest companies and one of the largest mobile device suppliers in any ecosystem or geography. Intel will need to step up its game in 4G LTE implementation to avoid more damaging testing and stories for the iPhone line, while also accelerating advanced 5G integration to make sure Apple isn’t left behind in that important race.

Is Apple going to drop out of the smartphone race without 5G? Absolutely not. Its install base, and fanatical community, will suffer through quite a lot to stay inside this ecosystem.

Apple has been able to convince consumers it offers the best of every facet of smartphone technology, from the camera to battery life, to accessories to performance. But with the entire wireless market and consumer base looking towards 5G on their smartphone, their car, their notebooks, factories, drones, and more, it cannot afford to take an extended backseat to competitors.

AMD sees strong growth in Q2, Ryzen Mobile doubles

This week AMD shared its Q2 2018 financial results, and the numbers are impressively positive. Revenue peaked at $1.76B, a 54% increase over the same quarter last year, and margins increased by 3% year-on-year. The company generated $156M in profit, the highest for AMD since 2011, a period in which AMD had leadership in the desktop and server processor spaces.

The bulk of income came from the compute and graphics unit, covering consumer processors and graphics, at $1.1B, 64% higher than this time last year. AMD doesn’t break down unit shipments to the granularity I would like, but it did tell us that Ryzen unit shipments “grew by strong double-digit percentage” from the previous quarter, which translates into something between 10% and 15% growth.

Radeon consumer sales increased year-on-year, but sequentially they fell back some due to the softening of the cryptocurrency market. While in Q1 the blockchain market represented around 10% of total revenue, that dropped to 6% based on AMD’s estimates, and it expects the figure to fall again in Q3. Datacenter GPU sales grew “significantly,” though no more detail than that was given. AMD will be able to grow unit share in this segment for a while because NVIDIA basically owns the market; the question will be how much revenue it can actually generate.

With this growth and execution, AMD is continuing to increase its R&D investment. For the first half of the year the R&D budget increased by 25% over the same period in 2017, a testament to the company’s commitment to keep the ball rolling. AMD has a big step coming in 2019 with the move to 7nm process technology with its partners at TSMC and GlobalFoundries, and getting that right for both CPUs and GPUs simultaneously will require some effort and capital.

Maybe most impressive from the news this week was that Ryzen Mobile shipments have more than doubled from Q1 2018. Again, AMD wouldn’t commit to more specific numbers, but mobile processor ASPs are up sequentially and year-on-year due to the Ryzen Mobile launch, as you would expect. It’s likely that the Ryzen Mobile ASP is significantly north of the previous A-series APUs, and as the product division migrates from the previous generation to new Ryzen-based systems, ASPs will climb even higher.

For the first time in AMD’s history, all three major commercial OEMs have announced Ryzen-based notebooks (and desktops): Dell, HP, and Lenovo. We have seen dozens of new laptops launch in Q2 from those big three, but also from Acer, ASUS, Huawei, and Samsung. With all of these coming into the market in the last three months, we are seeing just the beginning of what Ryzen Mobile will do in the rest of 2018.

Though it’s too early to get numbers based on these recent design wins for AMD, Mercury Research has tracked the growth of AMD’s market share in the notebook space for a long time. Q3 of 2017 showed 6.8% of the market was based on AMD platforms. That only increased to 6.9% in Q4 2017 but then jumped to 8.0% in Q1 of 2018. I believe we’ll see an even more significant jump once Q2 numbers are gathered and that the trend will continue through the end of the year.

If 2017 was the year of the return of AMD in the processor market, and 2018 the showcase of the company’s ability to execute on its roadmap, 2019 will show the market that AMD is not only competitive but set up as a potential leader in the processor space. The move to 7nm for both CPUs and GPUs, coupled with the headaches competitor Intel is having with its 10nm-and-beyond transition, will give AMD an advantage and capability it has never had: process leadership. Zen 2 and the Ryzen, Threadripper, EPYC, and Ryzen Mobile parts based on it could solidify AMD’s position well into the future.

New VR interface simplifies future designs

One of the biggest hassles of getting up and running with a high performance, desktop-class VR configuration is the setup. Despite the ease of use associated with phone-based designs that simply click into a headset and allow you to interact in an untethered fashion, PC-based systems require cables going between the user and the system. The virtual reality headsets also require the setup of cameras or tracking sensors in the room and then routing cables from the headset to the notebook or PC in a way that won’t entangle the user.

Ideally we would be moving to wireless video and data transmission for VR, but that is something that has been, and is still being, attempted by several players in the industry. Intel, as well as partners of HTC and Oculus, have options, but nothing I would call perfected. It also doesn’t solve the issue of power delivery for situations where batteries aren’t ideal.

A new consortium representing all the major players in the PC VR space created a standard connection based around USB Type-C. On board are NVIDIA, AMD, Oculus, Valve, and Microsoft – essentially anyone that matters to this market. Called VirtualLink, the group claims that it was “developed to meet the connectivity requirements of current and next-generation virtual reality (VR) headsets.”

As an “Alternate Mode” for Type-C, VirtualLink can utilize a lot of the capability from the USB consortium’s previous work while augmenting it for the VR space. The purpose of this connection is to create a simplified environment for VR users, condensing the cable management requirements and leaving headroom for future products and bandwidth needs.

A single VirtualLink connection offers four lanes of DisplayPort HBR3 (high bit rate), good for 32.4 Gbit/s of raw bandwidth, of which 25.92 Gbit/s is usable for video after the link’s encoding overhead. That allows VR headsets to receive as high as a 120Hz refresh rate at 4K resolution or stretch to 8K at 60Hz. Though the group says this interface is scalable for future designs, I don’t know if that means the DP integration will scale higher than what we see today or that VR devices will scale up into the 4K/8K resolutions.
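That 25.92 Gbit/s figure follows from the 8b/10b line coding HBR3 uses (32.4 Gbit/s × 0.8), and a rough calculation shows why 4K at 120Hz fits uncompressed while 8K at 60Hz would lean on compression or chroma subsampling. The blanking overhead below is my own assumption, not a number from the consortium.

```python
# Back-of-the-envelope check on the VirtualLink video budget.
# Assumptions (mine, not the consortium's): 24-bit color and ~5% overhead
# for reduced-blanking display timings; HBR3 itself uses 8b/10b coding.

LANES = 4
HBR3_RAW_PER_LANE = 8.1e9                  # bits/s, raw line rate per lane

raw_link = LANES * HBR3_RAW_PER_LANE       # 32.4 Gbit/s raw
video_budget = raw_link * 8 / 10           # ~25.92 Gbit/s after 8b/10b

def stream_bitrate(width, height, hz, bits_per_pixel=24, blanking=1.05):
    """Approximate uncompressed video bandwidth including blanking."""
    return width * height * hz * bits_per_pixel * blanking

need_4k120 = stream_bitrate(3840, 2160, 120)    # ~25.1 Gbit/s
need_8k60 = stream_bitrate(7680, 4320, 60)      # ~50.2 Gbit/s

print(f"Video budget: {video_budget / 1e9:.2f} Gbit/s")
print(f"4K @ 120Hz needs ~{need_4k120 / 1e9:.1f} Gbit/s -> fits uncompressed")
print(f"8K @ 60Hz needs ~{need_8k60 / 1e9:.1f} Gbit/s -> needs DSC or 4:2:0")
```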

VirtualLink includes a USB 3.1 data channel for sensor data transmissions to and from the headset. Any camera or telemetry data that the headset is reading or generating can be passed back to the PC in a way that melds with the current software infrastructure, but with the simplicity of a single cable.

Power delivery is something that USB Type-C and Thunderbolt have added that greatly improves the usability of new interfaces. VirtualLink does this as well, pushing out as much as 27 watts to the HMD (head mounted display) to power the display, cameras, sensors, etc.

With current implementations from Oculus (the Rift) and HTC (Vive and Vive Pro) requiring 2-3 cables each, with a breakout box in the middle of the critical path, there is no doubt VirtualLink will make things easier for consumers and for vendors. Because of slight alterations to the USB Type-C standard, existing Type-C cables won’t work; VirtualLink cables will need to be hardwired into the headset (or use a different, proprietary connector at the HMD), which reduces the chances of disconnection or a failed setup and configuration.

No timeline was given for the integration of VirtualLink in PCs or HMDs, though there have been rumors circulating that NVIDIA was including a “VR port” on its upcoming GeForce family update coming late in the summer. It seems likely that this is what those rumors were referring to – now with the background info on VirtualLink we can connect the dots. We don’t know anything about AMD’s plans, but its name as part of this consortium suggests that the next generation of Radeon GPUs will also integrate support for VirtualLink.

There is no roadmap for new PC-based VR headsets currently, with the Vive Pro being the most recent release, still using the standard integration methods from previous generations. I assume that the follow-on to that, and the next Rift device from Oculus, will integrate VirtualLink to start the process of simplified VR. In the interim, the consortium details converter boxes from companies like Bizlink that can output full-size DisplayPort, USB 2.0 and 3.0, and power from a single VirtualLink input.

One tidbit of information I am still trying to hunt down is how VirtualLink ports on next-generation graphics cards will process the USB 3.1 data channel and integrate the 27 watts of power delivery required. Pushing out 27 watts through VirtualLink is simple enough for high-performance graphics cards in the GTX 1080 class, though staying within the PCIe standards for power consumption will be critical. Lower-performance cards, which might not even require external power connections inside a PC, might struggle more.
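For a sense of the arithmetic involved, here is a minimal sketch using the standard PCIe power limits (75 W from the slot, 75 W from a 6-pin connector, 150 W from an 8-pin). The card TDP figures are illustrative assumptions on my part, not measured numbers from any vendor.

```python
# Rough power-budget check for a 27 W VirtualLink HMD load.
# PCIe CEM limits: 75 W slot, 75 W 6-pin, 150 W 8-pin.
# Card TDPs below are illustrative assumptions.

PCIE_SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}
HMD_LOAD_W = 27

def headroom(card_tdp_w, connectors):
    """Watts left after the card and the headset are powered."""
    budget = PCIE_SLOT_W + sum(CONNECTOR_W[c] for c in connectors)
    return budget - (card_tdp_w + HMD_LOAD_W)

# A GTX 1080-class card (~180 W, one 8-pin) stays inside the spec...
print("High-end card headroom:", headroom(180, ["8-pin"]), "W")   # 18 W
# ...while a bus-powered card (~65 W, no connectors) would exceed it.
print("Bus-powered card headroom:", headroom(65, []), "W")        # -17 W
```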

Graphics cards would have to emulate a USB 3.1 hub, or pass a virtual USB channel back to the host PC for the software infrastructure to detect and utilize the sensor data provided to and from the HMD itself. I haven’t seen anything like this done before, but it shouldn’t be hard for NVIDIA and AMD to integrate with the consortium unifying development.

Those details aside, the creation of a new standard interface and connection for virtual reality on the PC will improve on a comically complex situation that exists today. Will that be enough to accelerate the VR space on the PC? No, but it does mean the next generation of options from Oculus, HTC, or any other newcomer that integrates VirtualLink will have a better chance of convincing consumers it’s worth trying.

Promise of Magic Leap AR now powered by NVIDIA

After four long, long years of development in which many in the outside world (myself included) doubted that the product would ever see the light of day, Magic Leap held an event this week to give some final details on its development kit. First, the Magic Leap One Creator Edition will be shipping “this summer,” though nothing more specific was given. Pricing is still unknown, though hints at it being in the realm of a “high-end smartphone” point to this being a roughly $1,500 item.

For the uninitiated, Magic Leap is the company behind countless videos that appeared to show the most amazing augmented reality demonstrations you can imagine. The demos, some of which were claimed to be captured rather than created, were so mind blowing that it was easy to dismiss them as fantasy. Through this series of live streams, the Magic Leap team is attempting to demonstrate the capability of the hardware and leave the negative attention behind.

Magic Leap showcased a demo called Dodge, in which the wearer combines the use of their hand as a pointing and action device with the real world. As the wearer looked around the room, a grid was projected onto the floor, table, and couch, indicating the device recognized the surfaces thanks to its depth camera integration. Using an ungloved hand to point and pinch (replicating a click action), the user sets locations for a rock monster that emerges from the surface in a fun animation. It then tosses stones your way, which you can block with your hand and push away, or dodge to the side and watch them float harmlessly past – one even hit a wall behind the user and broke apart.

The demo is a bit jittery and far from perfect, but it proves that the technology is real. And the magic of watching a stone thrown past your head and virtually breaking on a real, physical surface is…awesome.

The other new information released concerned the hardware powering the One. For the first time outside of the automotive industry, the Tegra X2 from NVIDIA makes its way into a device. The Magic Leap One requires a substantial amount of computing horsepower to both track the world around the user and generate imagery realistic enough to immerse them. The previous-generation Tegra X1 is what powers the NVIDIA SHIELD TV and the Nintendo Switch, and the X2 can offer as much as 50% better performance than that.

Inside the TX2 is an SoC with four Arm Cortex-A57 CPU cores and two more powerful NVIDIA-designed Denver2 ARMv8 cores. A Pascal-based GPU complex is included as well with 256 CUDA cores, a small step below a budget discrete graphics card for PCs like the GeForce GT 1030. This is an impressive amount of performance for a device meant to be worn, and with the belt-mounted design that Magic Leap has integrated, we can avoid the discomfort of the heat and battery on our foreheads.

The division of processing is interesting as well. Magic Leap has dedicated half of the CPU cores to developer access (2x Arm A57 and 1x NVIDIA Denver2) while the other half is utilized for system functionality. This helps handle the overhead of monitoring the world-facing sensors and feeding the GPU with the data it needs to crunch to generate the AR imagery. There was no mention of dividing the resources of the 256 Pascal cores, but there is a lot to go around. It’s a good idea on Magic Leap’s part to ensure that developers are forced to leave enough hardware headroom for system functionality, drastically reducing the chances of frame drops, stutter, etc.
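To illustrate what that kind of core partitioning can look like from a developer’s side, here is a minimal sketch of pinning an application’s work to a subset of cores on a Linux-based system. The core IDs, and the idea that Magic Leap exposes the split this way, are assumptions for illustration only; the company has not published those SDK details.

```python
# Illustrative only: confine this process to the cores a platform reserves
# for applications, leaving the rest for system tasks (sensor fusion, etc.).
# Core IDs are hypothetical; Linux-specific (os.sched_setaffinity).
import os
import threading

APP_CORES = {0, 1, 2}   # hypothetical: 2x Cortex-A57 + 1x Denver2 for apps

def per_frame_work(frame_id):
    # Placeholder for app-side work: game logic, scene submission, etc.
    return sum(i * i for i in range(10_000)) + frame_id

if __name__ == "__main__":
    os.sched_setaffinity(0, APP_CORES)   # restrict the whole process

    threads = [threading.Thread(target=per_frame_work, args=(f,)) for f in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("App running on cores:", os.sched_getaffinity(0))
```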

The chip selection for Magic Leap is equally surprising, and not surprising. NVIDIA’s Tegra X2 is easily the most powerful mobile graphics system currently available, though I question the power consumption of the SoC and how that might affect battery life for a mobile device like this. Many had expected a Qualcomm Snapdragon part to be at the heart of the One, both because of the San Diego company’s emphasis on VR/AR and mobile compute, but also because Qualcomm had invested in the small tech firm. At least for now, the performance that NVIDIA can provide overrides all other advantages competing chips might have, and the green-team can chalk up yet another win for its AI/graphics/compute story line.

There is still much to learn about the Magic Leap One, including where and how it will be sold to consumers. This first hardware is targeting developers just as the first waves of VR headsets did from Oculus and HTC; a necessary move to have any hope of creating a software library worthy of the expensive purchase. AT&T announced that it would be the “exclusive wireless distributor” for Magic Leap One devices in the US, but that is a specific niche that won’t reach much of the total AR user base. As with other VR technologies, this is something that will likely need to be demoed to be believed, so stations at Best Buy and other brick-and-mortar stores are going to be required.

For now, consider me moved from the “it’s never going to happen” camp to the “I’ll believe it when I try it” one instead. That’s not a HUGE upgrade for Magic Leap, but perhaps the fears of vaporware can finally be abated.

Memory market and Micron continue upward trend

One of the darling tech stocks of the last year has been Micron, a relative unknown in the world of technology compared to names like Intel, NVIDIA, and Samsung. With a stock price driven by market demands, and increasing over 90% in the last calendar year, there are lots of questions about the strength of Micron in a field where competitors like Samsung, and even Intel, are much bigger names.

Last month, Micron’s earnings were eye opening. For its fiscal Q3 it posted a 40% increase in revenue over the same quarter the previous year. Even more impressive was a doubling of profit in that same period. The quarterly results showed $3.82B in net income on $7.8B in revenue, with a Q4 revenue forecast of $8.0-8.4B.
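To put those figures in perspective, the implied net margin is easy to work out (my own quick arithmetic from the numbers above, not Micron’s reporting).

```python
# Quick check on the fiscal Q3 figures cited above.
net_income = 3.82e9
revenue = 7.8e9
q4_guidance = (8.0e9, 8.4e9)

print(f"Net margin: {net_income / revenue:.0%}")                    # ~49%
print(f"Q4 guidance midpoint: ${sum(q4_guidance) / 2 / 1e9:.1f}B")  # $8.2B
```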

NVIDIA, by contrast, had $3.2B in revenue last quarter. Yet the GPU giant is getting more attention and more analysis than a company more than twice its size.

As part of the earnings announcement, Micron CEO Sanjay Mehrotra expressed confidence in the continued demand for memory as well as the ability for Micron to maintain its profit margins with consistent pricing. This directly addressed a key segment of the financial analyst community that continues to worry that memory demand will dry up and limit the growth potential for Micron. Micron is at higher risk in that scenario because of its singular focus on memory technology, while competitors like Samsung and Intel are more diversified.

This Boise, Idaho-based company has to answer the same question as the rest of the memory vendors in the tech field: will demand for memory abate with product shifts in the market or when build capacity catches up?

There are several reasons why we could see demand for both DRAM (system memory) and NAND (long-term storage) slow down. By many measures the smartphone market has peaked, with only developing nations like China and India still increasing unit sales, and with much lower cost devices. China sales of phones are in flux thanks to trade war and tariff concerns – Qualcomm and Micron are both US-based and are major providers of smartphone technology. The Chinese government is investigating memory price-fixing accusations against all major vendors, and a poor outcome there could incur major penalties and unpredictable changes to the market.

But the Micron CEO doesn’t believe those factors will win out, and neither do I. For the foreseeable future, DRAM demands will continue to grow with mobile devices as we increase the amount of memory in each unit. The coming explosion of IoT products numbering in the billions will all require some type of system DRAM to run, giving Micron and others a significant opportunity to grow. And we cannot forget about the power of the data center and, in particular, the AI compute market. NVIDIA might be the name driving the AI space but every processor it builds will require memory, and massive amounts of it.

In the NAND market for SSDs, there is a lot of competition. But Micron benefits from the OEM arrangements as well as the push into more vertical integration, selling direct to consumers and enterprise customers. Micron has made a push to counter the DIY and OEM dominance of Samsung SSDs with its own Crucial and Micron-branded options, a move that is improving with each generational release.

As more customers migrate from spinning hard drives in their PCs and servers to larger capacity solid state drives that are faster and more reliable, there remains a sizeable opportunity for memory vendors.

If demand continues to increase, capacity remains the next question. When AMD was building its Vega GPU and utilizing a new memory technology called HBM2, the product suffered because of limited availability. Though Micron was not playing in the HBM (high bandwidth memory) space, it is a recent example of how the memory market is still playing catch-up with the various demands of technology companies.

There are additional fab facilities being built, but if it seems like memory vendors aren’t bringing them up as fast as they could, you aren’t alone in thinking so. New fabs will alleviate capacity concerns, but they will also push pricing down and lower margins, something any reasonable business will be concerned about in volatile markets.

Over the decades of memory production, the market has been cyclical. As technologies moved from generation to generation, demand would plummet, followed by higher prices associated with the NEXT memory technology. As use of that memory peaked and fell, the cycle would restart anew. But because of the growth in demand for memory products of all kinds, and segments of extreme growth like AI and IoT, it looks like this pattern will be stalled for some time.

Can Qualcomm Hit Intel with Rumored Snapdragon 1000 Chip?

Over the course of the last week or two, rumors have been consistently circulating that Qualcomm has plans for a bigger, faster processor for the Windows PC market coming next year. What is expected to be called “Snapdragon 1000” will not simply be an up-clocked smartphone processor; instead it will utilize specific capabilities and design features for larger form factors that require higher performance.

The goal for Qualcomm is to close and eventually eliminate the gap between its Windows processor options and Intel’s Core series of CPUs. Today you can buy Windows 10 PCs powered by the Snapdragon 835, and the company has already announced the Snapdragon 850 Mobile Compute Platform for the next generation of systems. The SD 835-based solutions are capable, but consumers often claim they lack some of the necessary “oomph” for non-native Arm applications. The SD 850 will increase performance by around 30% on both CPU and graphics, but it will likely still be at a disadvantage.

As a user of the HP Envy x2 and the ASUS NovaGo, both powered by the Snapdragon 835, I strongly believe they offer an experience that addresses the majority of consumers’ performance demands today, with key benefits of extraordinary battery life and always-on connectivity. But more performance and more application compatibility are what is needed to take this platform to the next level. It looks like the upcoming Snapdragon 1000 might do it.

The development systems that led to this leak/rumor are running with 16GB of LPDDR4 memory, 256GB of storage, a Gigabit-class (likely higher) LTE modem, and updated power management technology. It also uses a socketed chip, though I doubt that would make it to the final implementation of the platform, as it would dramatically reduce the board size advantage Qualcomm currently has over Intel offerings.

Details on how many cores the Snapdragon 1000 might use and what combination of “big.LITTLE” they might integrate are still unknown. Early reports are discussing the much larger CPU package size on the development system and making assertions on the die size of a production SD 1000 part, but realistically anything in the prototyping stage is a bad indicator for that.

It does appear that Qualcomm will scale the TDP up from the ~6.5 watts of the SD 835/850 to 12 watts, more in line with what Intel does with its U-series parts. This should give the Qualcomm chip the ability to hit higher clocks and integrate more cores or additional systems (graphics, AI). I do worry that going outside of the TDP range we are used to on Qualcomm mobile processors might lead to an efficiency drop, taking away the extended battery life advantage that its Windows 10 PCs hold over Intel today. Hopefully the Qualcomm product teams and engineers understand how pivotal that is for the platform’s success and maintain it.

Early money is on the SD 1000 being based on a customized version of the Arm Cortex-A76 core announced at the end of May. Arm made very bold claims to go along with this release, including “laptop-class performance” without losing the efficiency advantages that have given Arm its distinction throughout the mobile space. If Arm, and by extension Qualcomm, can develop a core that is within 10% of the IPC (instructions per clock) of Skylake, and deliver the extreme die size advantages we think they can achieve, the battle for the notebook space is going to be extremely interesting toward the middle of 2019.

Intel is not usually a company I would bet on being caught flat-footed or slow to respond in a battle like this. But the well-documented troubles it finds itself in with the 10nm process technology transition, along with the execution from TSMC on its roadmap to 7nm and EUV, mean that Qualcomm will have an opportunity. Qualcomm, and even AMD in the higher-end space, couldn’t have asked for a better combination of events to tackle Intel: a swapping of process technology leadership from Intel to external foundries, along with new CPU and core designs that are effective and efficient, means we will have a battle in numerous chip markets that we have not had in a decade.

These are merely rumors, but matching up with the release of the Arm Cortex-A76 makes them more substantial – Qualcomm appears to be stepping up its game in the Windows PC space.

Can Dell capitalize on our VR/AR future?

The market for current generation VR technology is in an interesting place. Many in the field (including analysts like myself) looked at the state of VR in 2015/2016 and thought that the rise and advance of sales, adoption, software support, and vendor integration would be significantly higher than what we have actually witnessed. Though the HTC Vive and Oculus Rift on the PC, as well as Gear VR from Samsung and various VR platforms from Qualcomm do provide excellent experiences in price ranges from $200 to $2000, the curve of adoption just hasn’t been as steep as many had predicted.

That said, most that follow the innovation developments in VR and AR (augmented reality) clearly see that the technology still has an important future for consumer, commercial, and enterprise applications. Let’s be real: VR isn’t going away, and we are not going to see a collapse like the ones that plagued previous virtual reality market attempts. Growth might be slower, and AR could be the inflection point that truly drives adoption, but everyone should be prepared to consume content and interact through this medium.

There is no shortage of players in the VR/AR market, all attempting to leave their mark on the community. From hardware designs to software to distribution platforms and even tools development, there are a lot of avenues for companies looking to invest in VR to do so. But one company that potentially could have a more significant impact on VR, should it choose to make the investment of budget and time, is Dell. It may not be the obvious leader for a market space like this, but there is an opportunity for Dell to leverage its capabilities and experience to get in on the ground level of disruptive VR technology. There is more Dell can do than simply re-brand and resell what Microsoft has decided is its direction for VR.

Here are my reasons why that is the case:

  • The combined market advantage: Dell has the ability to address both commercial and consumer VR usage scenarios through its expansive channel and support systems. In recent months I have seen the interest in commercial applications of VR speed up more than end-user applications as the market sees what VR and AR can do for workflows, medical applications, design, customer walk-throughs and more. Very few PC and hardware companies have the infrastructure in place that Dell does to be able to hit both sides.
  • Display expertise: Though a lot of different technology goes into VR and AR headsets (today and in the future), the most prominent feature is the display. Resolution, refresh rate, response time, color quality, and physical design are key parts of making a comfortable headset usable for long sessions. Dell is known for its industry-leading displays for PCs, and though the company isn’t manufacturing the panels, it has the team in place to procure and scrutinize display technology, ensuring that its products would have among the best experiences available for VR.
  • Hardware flexibility: Because Dell has the ability to offer silicon solutions from any provider, including Intel, AMD, Qualcomm, or anyone else that provides a new solution, it is not tied to any particular reference design or segment. While Intel has backed out of VR designs (at least for now) and Qualcomm is working with partners like Lenovo and Oculus as they modify mobile-based reference designs for untethered VR headsets, Dell would be able to offer a full portfolio of solutions. Need a tethered, ultra-performance solution for CAD development or gaming? Dell has the PCs to go along with it. Need a mobile headset for on-site validation or budget consumer models? It could provide its own solution built with Qualcomm Snapdragon silicon or innovate with a mobile configuration of AMD APUs.
  • The Alienware advantage: One of the most interesting and prominent uses for VR and AR is gaming, and Dell has one of the leading brands in PC gaming, Alienware. By utilizing the mindshare and partnerships that division lead Frank Azor has fostered since Alienware was acquired by Dell, the brand could position itself as the leader in virtual reality gaming.
  • Being private offers stability: Though there are rumors that Dell wants to transition back to being a public company, private ownership gives leadership more flexibility to try new things and enter new markets without shareholders publicly breathing down its neck if it isn’t hitting quarterly margin or revenue goals. Because VR and AR are a growing field with a lot of questions circling around them, the need to “push for revenue” on day one can haunt a public company interested in VR. Dell would not have that pressure, giving its design and engineering teams time to develop and to be creative.
  • A strong design team: Despite being known as “just” a PC vendor to most of the market, Dell has a significant in-house design team that focuses on user interface, hardware design, and how technology interacts with people. I have seen much of this first hand in meetings with Dell showcasing its plans for the future of computing; it develops and creates much more than most would think. Applying this team to VR and AR designs, including hardware and software interaction, could create compelling designs.

There is no clear answer or path to the future of virtual or augmented reality. It is what makes the segment simultaneously so exciting and frightening for those of us watching it all unfold, and for the companies that invest in it. There are and will remain many players in the field, and everyone from Facebook to Qualcomm will have some say in what the future of interactive computing looks like. The question is, will Dell be a part of that story too?

AMD Could Grab 15% of the Server Market, says Intel

Before the launch of its Zen-architecture processors, AMD had fallen to basically zero percent market share in the server and data center space. At its peak, AMD held 25% of the market with the Opteron family, but limited improvement in performance and features slowly dragged the brand down and Intel took over the segment, providing valuable margin and revenue.

As I have written many times, the new EPYC family of chips has the capability to take back market share from Intel in the server space with its combination of performance and price-aggressive sales. AMD internally has been targeting a 5% share goal of this segment, worth at least $1B of the total $20B market size.

However, it appears that AMD might be underselling its own potential, and Intel’s CEO agrees.

In a new update from analyst firm Instinet, the group met and spoke directly with Intel CEO Brian Krzanich and found that Intel sees a brighter future for AMD in the data center. Krzanich bluntly stated that Intel would lose server share to AMD in 2018, which is an easy statement to back up. Going from near-zero share to any measurable sales will mean fewer parts sold by Intel.

Clearly AMD is not holding back on marketing for EPYC.

In the discussion, Krzanich stated that “it was Intel’s job to not let AMD capture 15-20% market share.” If Intel is preparing for a market where AMD is able to jump to that level of sales and server deployment then the future for both companies could see drastic shifts. If AMD is able to capture 15% of data center processor sales that would equate to $3B in revenue migrating from incumbent to the challenger. By no measurement is this merely a footnote.
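Those dollar figures fall straight out of the $20B total market size cited above; here is a quick restatement of that arithmetic.

```python
# Revenue implied by various shares of the ~$20B data center CPU market.
TAM = 20e9
for share in (0.05, 0.15, 0.20):
    print(f"{share:.0%} share -> ${share * TAM / 1e9:.0f}B per year")
# 5% -> $1B, 15% -> $3B, 20% -> $4B
```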

For months I have been writing that AMD products and roadmaps, along with the impressive execution the teams have provided, would turn into long-term advantages for the company. AMD knows that it cannot compete in every portion of the data center market with the EPYC chip family as it exists today, but where it does offer performance advantages or equivalency, AMD was smart enough to be aggressive with pricing and marketing, essentially forcing major customers, from Microsoft to Tencent, to test and deploy hardware.

Apparently Intel feels similarly.

Other details in the commentary from Instinet show the amount of strain Intel’s slowing production roadmap is putting on product development. Intel recently announced during an earnings call that its 10nm process technology, which would allow it to produce smaller, faster, more power-efficient chips, was delayed until 2019.

Krzanich claims that customers do not care about the specifics of how the chips are made, only that performance and features improve year to year. Intel plans updates to its current process technology for additional tweaking of designs, but the longer Intel takes to improve manufacturing by a significant amount, the more time rivals AMD and NVIDIA will be able to utilize third party advantages to improve market positions.

Intel and AMD both dive into many-core CPU race

Not long ago, 2- and 4-core processors seemed permanently entrenched in the consumer CPU market. Both Intel and AMD had become satisfied with four cores being the pinnacle of our computing environments, at least when it came to mainstream PCs. And in the notebook space, that line was weighted lower, with the majority of thin and light machines shipping from OEMs with dual-core configurations, leaving only the flagship gaming devices with H-series quad-core options.

Intel first launched 6-core processors in its HEDT (high end desktop) line back in 2010, when it came up with the idea to migrate its Xeon workstation product to a high-end, high-margin enthusiast market. But core count increases were slow to be adopted, both due to software limitations and because the competition from AMD was minimal, at best.

But when AMD launched Ryzen last year, it started a war that continues to this day. By releasing an 8-core, 16-thread processor at mainstream prices, well under where Intel had placed its HEDT line, AMD was able to accomplish something that we had predicted would start years earlier: a core count race.

Obviously AMD didn’t create an 8-core and price it aggressively against Intel’s options out of the goodness of its heart. AMD knew that it would fall behind the Intel CPU lineup when it came to many single threaded, single core tasks like gaming and productivity. To differentiate and to be able to claim performance benefits in other, more content creation heavy tasks, AMD was willing to spend additional silicon. It provided an 8-core design priced against Intel’s 4-core CPUs.

The response from Intel was slower than many would have liked, but respond it did. It launched 6-core mainstream Coffee Lake processors that closed the gap but required new motherboards and appeared to put Intel out of its expected cadence of release schedules.

Then AMD brought out Threadripper, a class of competitor it had never fielded previously to go up against Intel’s X-series platforms. It doubled the core count to 16, with 32 threads available! As a result, Intel moved forward its schedule for Skylake-X and released parts with up to 18 cores, though at very high prices by comparison.

Internally, Intel executives were livid that AMD had beaten them to the punch and had been able to quickly release a 16-core offering to steal mindshare in a market that Intel had created and led throughout its existence.

And thus, the current many-core CPU races began.

At Computex this week, both Intel and AMD are beating this drum. The many-core race is showing all its glory, and all of its problems.

Intel’s press conference was first, and the company had heard rumblings that AMD might be planning a reveal of its 2nd-generation Threadripper processors with higher core counts. So it devised an impressive demonstration of a 28-core processor running at an unheard-of 5 GHz on all cores – it’s hard to overstate how impressive that amount of performance is. It produced a benchmark score in a common rendering test that was 2.2x faster than anything we had seen previously in a single-socket, stock configuration.

This demo used a socket previously unutilized on a consumer platform, LGA3647, built for the current generation of Xeon Scalable processors. The chip is also a single, monolithic die, which does present some architectural benefits over AMD’s multi-chip designs if you can get past the manufacturing difficulties.

However, there has been a lot of fallout from this demo. Rather than anything resembling a standard consumer cooling configuration, Intel used a water chiller rated at 1 HP (horsepower), utilizing A/C refrigerant and insulated tubing to get the CPU down to 4 degrees Celsius. This was nothing like a consumer product demo; it was more of a technology and capability showcase. We will not see a product at these performance levels available to buy this year, and that knowledge has put some media, initially impressed by the demo, in a foul mood.

The AMD press conference was quite different. AMD SVP Jim Anderson showed a 32-core Threadripper processor using the same socket as the previous generation solutions. AMD is doubling the core count for its high-end consumer product line again in just a single year. This brings Threadripper up to the same core and thread count as its EPYC server CPU family.

AMD’s demo didn’t focus on specific performance numbers though it did compare a 24-core version of Threadripper to an 18-core version of Intel’s currently shipping HEDT family. AMD went out of its way to mention that both the 24-core and 32-core demos were running on air-cooled systems, not requiring any exotic cooling solutions.

It is likely AMD was planning to show specific benchmark numbers at its event, but because Intel had gone the “insane” route and put forward some unfathomably impressive scores, AMD decided to back off. Even though media and analysts who pay attention to the circumstances around these demos would understand how invalid the comparison was, the comparison would have been made anyway, and AMD would have lost it.

As it stands, AMD was showing us what we will have access to later in Q3 of 2018 while Intel was showing us something we may never get to utilize.

The takeaway from both events and product demos is that the many-core future is here, even if the competitors took very different approaches to showcase it.

There are legitimate questions about the usefulness of this many-core race, as the software that can utilize this many threads on a PC is expanding slowly, but creating powerful hardware that offers flexibility to the developer is always a positive move. We can’t build the future if we don’t have the hardware to do it.

New XR1 chip from Qualcomm starts path to dedicated VR/AR processing

During the Augmented World Expo in Santa Clara this week, Qualcomm announced a new, dedicated platform for future XR (extended reality) devices, the Snapdragon XR1. Targeting future standalone AR and VR headset and glasses designs, the XR1 marks the beginning of the company’s dedicated platforms aimed squarely at the segment.

The Snapdragon XR1 will be the first of a family of XR dedicated chips and platforms to enable a wide range of performance and pricing classes across an array of partner devices. Qualcomm is only releasing information on the XR1 today, but we can assume that future iterations will be incoming to create a tiered collection of processors to target mainstream to flagship level hardware.

Though Qualcomm today uses the existing Snapdragon mobile platforms to build its AR/VR reference designs, the expected growth of this field into a 186-million-unit base by 2023 is pushing the company to be more direct and more specialized in its product development.

The Snapdragon XR1 will address what Qualcomm calls the “high quality” implementations of VR. Placed well above the cheap or free “cardboard” integrations of smartphone-enabled designs, the XR1 exists to create solutions that are at or above the current level of standalone VR hardware powered by current Qualcomm chips. Today’s landscape of devices features products like the Oculus Go powered by the Snapdragon 821 mobile platform and the Lenovo Mirage Solo using the Snapdragon 835. Both are excellent examples of VR implementations, but the company sees a long-term benefit from removing the association of “mobile processor” from its flagship offerings for XR.

Instead, creating a customized, high-value brand to target dedicated VR headsets gives the company flexibility in pricing and feature set, without pigeon-holing the product team into predefined directions.

The specifics on the new Snapdragon XR1 are a bit of a mystery, but it includes most of the components we are used to seeing in mobile designs. That means a Kryo CPU, an Adreno GPU, a Hexagon DSP, audio processing, security features, image signal processor, and Wi-Fi. Missing from the mix is an LTE-capable modem, something that seems at least slightly counter to the company’s overall message of always connected devices.

Detailed performance metrics are missing for now, as Qualcomm allows its partners to design products around the XR1. With varying thermal constraints and battery life requirements, I think we’ll see some design-to-design differences between hardware. At that point we will need Qualcomm to divulge more information about the inner workings of the Snapdragon XR1.

I expect we’ll find the XR1 to perform around the level of the Snapdragon 835 SoC. This is interesting as the company has already announced the Snapdragon 845 as part of its latest AR/VR standalone reference headset design. The XR1 is targeting mainstream pricing in the world of VR, think the $200-$400 range, leaving the Snapdragon 845 as the current leader. If and when we see an XR2 model announced, I expect it will exceed the performance of the SD 845 and the family of XR chips will expand accordingly.

The Qualcomm Snapdragon XR1 does have some impressive capabilities of its own, though keep in mind that all of this is encompassed in the 845 designs as well. Features like 4K HDR display output, 3D spatial audio with aptX and Aqstic capability, voice UI and noise filtering, and even AI processing courtesy of the CPU/GPU/DSP combo, all feature prominently in the XR1. The chip will support both 3DoF (degree of freedom) movement and controllers as well as 6DoF, though the wider range of available movement will be associated with higher tier, higher priced devices.

The first crop of announced customers includes HTC Vive and several others. Oculus isn’t on the list, but I think that’s because the Oculus Go was just released and utilizes a lower level of processor technology than XR1 will provide. Qualcomm has solidified its leadership position in the world of standalone AR/VR headsets, with almost no direct competition on the horizon. As the market opportunity expands, so will the potential for Qualcomm’s growth in it, but also the likelihood we will see other companies dip their toes into the mix.

Dell Cinema Proves PC Innovation Still Vital

The future of the consumer PC needs to revolve around more than just hardware specifications. Which Intel or AMD processor powers the system, and whether an NVIDIA or Radeon graphics chip is included, are important details to know, but OEMs like Dell, HP, Lenovo, and Acer need to focus more on the end-point consumer experiences that define the products they ultimately hand to customers. Discrete features and capabilities are good check-box items, but in a world of homogeneous design and platform selection, finding a way to differentiate your solution in a worthwhile manner is paramount.

This is a trend I have witnessed at nearly every stage of this product vertical. In the DIY PC space, motherboard companies faltered when key performance features were moved from their control (like the memory controller) onto the processor die itself. This meant that the differences between motherboards were nearly zero in terms of what a user actually felt, and even in what raw benchmarks would show. The market has evolved to a features-first mentality, focusing on visual aesthetics and add-ons like RGB lighting as much as, or more than, performance and overclocking.

For the PC space, the likes of Dell, HP, and Lenovo have been fighting this battle for a number of years. There are only so many Intel processors to pick from (though things are a bit more interesting with AMD back in the running) and only so many storage solutions, etc. When all of your competition has access to the same parts list, you must innovate in ways outside what many consider to be in the wheelhouse of PCs.

Dell has picked a path for its consumer product line that attempts to focus consumers on sets of technology that can improve their video watching experience. Called “Dell Cinema”, this initiative is a company-wide direction, crossing different product lines and segments. Dell builds nearly every type of PC you can imagine, many of which can and do benefit from the tech and marketing push of Dell Cinema, including notebooks, desktop PCs, all-in-ones, and even displays.

Dell Cinema is flexible as well, meaning it can be integrated with future product releases or even expanded into the commercial space, if warranted.

The idea behind this campaign, both of marketing and of technology innovation, is to build better visuals, audio, and streaming hardware that benefits the consumer as they watch TV, movies, or other streaming content. Considering the incredible amount of time spent streaming media on PCs and mobile devices, Dell’s aggressive push and emphasis here is well placed.

Dell Cinema consists of three categories: CinemaColor, CinemaSound, and CinemaStream. None of these have specific definitions today. With the range of hardware the technology is being implemented on, from displays to desktops to high-end XPS notebooks, there will be a variety of implementations and quality levels.

CinemaColor focuses on the display for both notebooks and discrete monitors. Here, Dell wants to enhance color quality and provide better contrast ratios, with brighter whites and darker blacks. Though Dell won’t be able to build HDR-level displays into each product, the goal is to create screens that are “optimized for HDR content.” Some of the key tenets of the CinemaColor design are 4K screen support, thinner bezels (like the company’s Infinity Edge design), and support for Windows HD Color options.

For audio lovers, Dell CinemaSound improves audio on embedded speakers (notebooks, displays) as well as connected speakers through digital and analog outputs, including headphones. Through a combination of hardware selection and in-house software, Dell says it can provide users with a more dynamic sound stage with clearer highs, enhanced bass, and higher volume levels without distortion. The audio processing comes from Dell’s MaxxAudio Pro software suite that allows for equalization control, targeting entertainment experiences like movies and TV.

The most technically interesting might be CinemaStream. Using specific hardware selected with this in mind, along with a software and driver suite tuned for streaming, this technology optimizes the resources on the PC to deliver a consistently smooth streaming video experience. Intelligent software identifies when you are streaming video content and then prioritizes that on the network stack as well as in hardware utilization. This equates to less buffering and stuttering in content and potentially better resolution playback with more available bandwidth to the streaming application.
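Dell hasn’t published how CinemaStream is implemented, but the general technique it describes, identifying streaming traffic and marking it for priority treatment, can be sketched in a few lines. The DSCP value and the idea of tagging at the socket level are illustrative assumptions on my part, not Dell’s method.

```python
# Illustrative sketch of network-level prioritization (not Dell's actual
# CinemaStream implementation). Marking a socket's traffic with a DSCP
# value lets QoS-aware network stacks and routers favor it over bulk data.
# Works as written on Linux; Windows typically handles this via QoS policy.
import socket

DSCP_EF = 46                 # "expedited forwarding" class
TOS_VALUE = DSCP_EF << 2     # DSCP sits in the upper 6 bits of the TOS byte

def open_prioritized_stream(host: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    sock.connect((host, port))
    return sock

# Hypothetical endpoint: video fetched over this socket now carries the
# EF marking that prioritization logic can act on.
# s = open_prioritized_stream("video.example.com", 443)
```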

These three features combine to form Dell Cinema. Though other OEMs and even component vendors have programs in place to tout capabilities like this for displays or sound, only Dell has succeeded in creating a singular, easily communicated brand around them.

Though it was seemingly impossible in its first rollout, I would like to see Dell better communicate details of the technology behind Dell Cinema. Or better yet, let’s place specific lines in the sand: CinemaColor means any display (notebook or monitor) will have a certain color depth rating or above, or a certain HDR capability or above, etc. The same goes for audio and streaming – Dell should look to make these initiatives more concrete. This would provide more confidence to consumers, who would no longer have to parse nebulous language.

While great in both theory and in first-wave implementations, something like Dell Cinema can’t be a one-trick-pony for the company. Dell needs to show iteration and plans for future improvement for it, not only for Dell Cinema to be taken seriously, but for any other similar program that might develop in the future. Trust is a critical component of marketing and technology initiatives like this one, and consumers are quick to shun any product or company that abandons them.

HP Isn’t Standing Still as Top Market Share PC OEM

If you ask around the tech industry, you’ll hear stories about PC vendors and technology companies that lose their edge after becoming a market leader. Competition sharpens minds and accelerates research and design initiatives to gain that one foothold over the other guy that puts you firmly in the leading position. But too often that slips away as stagnation and complacency roll in.

With Q1 results from IDC available, HP continues to maintain market share leadership in the PC space, pulling in 20.9% and holding above Dell and Lenovo. Recent announcements also indicate that the company is attempting to avoid any stalling of growth by continuing to innovate and push forward with new designs and product programs.

The new philosophy at HP focuses on the “one-life” design ideal, where commercial and consumer users share hardware between personal and business use. For a younger generation that blurs the lines between work time and play time, having devices that can fill both roles and permit a seamless transition between those states is ideal.

Just this month, the company announced updates to its premium lines of notebooks and desktops in an attempt to showcase its ability to provide products fitting of both roles.

Perhaps the most interesting is the new HP Envy Curve, an all-in-one PC that is more than your typical tabletop design. The internals of the system are impressive, but they aren’t what makes it stand out. Yes, a powerful 8th-gen Intel CPU, SSD storage, and 16GB of memory are a requirement, but it’s the smaller touches that set the Envy Curve apart.

In the base of the unit HP has embedded four custom Bang & Olufsen speakers angled upward at 45 degrees to better direct audio to the consumer. All-in-ones have traditionally included the most basic of speaker implementations, but HP is hoping it can add value to the Curve and provide high quality sound without the need for external hardware.

The curved 27-in or 34-in QHD display rests on a thin neck and is coupled with a wood grain finish on the back, giving the PC a distinct aesthetic that few others in the computing market offer. If computing is under threat of being commoditized, then customization and style will be key drivers of upgrades and purchases.

Two other innovations help the Envy Curve stand out. The base includes an embedded wireless Qi charging ring, meaning you can charge your phone without the annoyance of cables or USB ports, maintaining a clean footprint. HP has also integrated Amazon Alexa support, giving PC users access to the most popular digital assistant in the home and an alternative to Cortana on Windows 10. It all adds up to a unique product for a shifting desktop landscape.

Though the Envy Curve is part of it, the Envy family is better known for its notebook line. It rests between the company’s budget-minded Pavilion and the ultra-high-end Spectre options. Attempting to further drive the premium notebook space, where HP gained 3.2 share points just last quarter, the Envy lineup will be seeing a host of new features and options courtesy of the innovation started with Spectre.

These design changes include thinner bezels around the screen, shrinking them to get nearer to the edge-to-edge designs that have overtaken smartphones. HP was quick to point out that despite this move, it kept the user-facing camera at the top of the device, even though it means a slightly wider bezel on that portion, to prevent the awkward camera angles of other laptops.

Sure View is a technology that makes the display unreadable to anyone viewing it from an off angle, preserving the privacy of data and content; it is a nice addition stemming from the company’s business line. It can be enabled with the touch of a button and doesn’t require the consumer to semi-permanently apply a stick-on privacy film.

Both the 13-in and 17-in Envy models will be using 1080p displays but have a unique lift hinge that moves the keyboard to an angle more comfortable for typing. HP was able to make the device slim and attractive but still maintain connectivity options users demand by implementing a jawed USB Type-A port.

The convertible Envy x360 13-in and 15-in improvements are similar, and both now offer AMD Ryzen APU processor options, giving consumers a lower cost solution that provides a very different performance profile to the Intel processors.

HP Elitebooks, known for their enterprise capabilities, got some updates this month as well. The new Elitebook 1050 G1 is the first 15-in screen in the segment and includes pro-level features like Sure Click, Sure Start, and Sure View, all aimed at keeping commercial hardware secure and reliable. The Elitebook x360 1030 shrinks the device footprint by 10%, squeezes a 13-in screen into a form factor typical of 12-in models, and has a display viewable in direct sunlight that reaches brightness levels as high as 700 nits, perfect for contractors and sales teams that need to work outdoors.

To be fair, nothing that was announced is revolutionary, but pulling off a revolutionary shift in the mature PC space is nearly impossible. Design changes like thinner bezels, smaller footprints, brighter screens, and even Amazon Alexa integration do show that there is room left in the tank to tweak and perfect designs. HP is using its engineering and product teams to do just that, while trying to maintain the market share position it has earned over Dell and Lenovo.

For those in the space that thought the PC was dead and innovation was over, HP has a few devices it would like to show you.

Google creates some spin with TPU 3.0 announcement

During the opening keynote of Google I/O yesterday, the company announced a new version of its Tensor Processing Unit, TPU 3.0. Though details were incredibly light, CEO Sundar Pichai claimed that TPU 3.0 would have “8x the performance” of the previous generation and that it was going to require liquid cooling to reach those performance levels. Immediately, much of the technical media incorrectly asserted an 8x architectural jump without thinking through the implications or how Google might have come to those numbers.

For those that might not be up on the development, Google announced the TPU back in 2016 as an ASIC specifically targeting AI acceleration. Expectedly, this drew a lot of attention from all corners of the field, as it marked not only one of the first custom AI accelerator designs but also one from one of the biggest names in computing. The Tensor Processing Unit targets TensorFlow, a library for machine learning and deep neural networks developed by Google. Unlike other AI training hardware, that does limit the TPU’s use cases to customers of Google Cloud products and to TensorFlow-based applications.

These are proprietary chips and are not available for external purchase. Just a few months ago, the New York Times reported that Google would begin offering access to TPUs through Google Cloud services. But Google has no shortage of use cases for internal AI processing that TPUs can address, from Google Photos to Assistant to Maps.

The liquid cooling setup for TPU 3.0

Looking back at the TPU 3.0 announcement yesterday, there are some interesting caveats to the claims and statements Google made. First, the crowd cheered when it heard this setup was going to require liquid cooling. In reality, this means either that there has been a dramatic reduction in efficiency with the third-generation chip or that the chips are being packed much more tightly into these servers without room for traditional cooling.

Efficiency drops could mean that Google is pushing the clock speed up on the silicon, ahead of the optimal efficiency curve to get that extra frequency. This is a common tactic in ASIC designs to stretch out performance of existing manufacturing processes or close the gap with competing hardware solutions.

Liquid cooling in enterprise environments isn’t unheard of, but it is less reliable and more costly to integrate.

The extremely exciting performance claims should be tempered somewhat as well. Though the 8x improvement and the statement of 100 PetaFLOPS of performance are impressive, they don’t tell us the whole story. Google was quoting numbers from a “pod,” the term the company uses for a combination of TPU chips and supporting hardware that consumes considerable physical space.

A single Google TPU 3.0 Pod

TPU 2.0 pods combined 256 chips, but for TPU 3.0 it appears Google is collecting 512 into a single unit. Beyond the physical size increase that goes along with that, this means relative performance for each chip of TPU 3.0 versus TPU 2.0 is about 2x. That’s a sizeable jump, but not unexpected in the ever-changing world of AI algorithms and custom acceleration. There is likely some combination of clock speed and architectural improvement that equates to this doubling of per-chip performance, though with that liquid cooling requirement I lean more towards clock speed jumps.

Google has not yet shared architectural information about TPU 3.0 or how it has changed from the previous generation. Availability for TPU 3.0 is unknown, but even Cloud TPU (using TPU 2.0) isn’t targeted until the end of 2018.

Google’s development in AI acceleration is certainly interesting and will continue to push the industry forward in key ways. You can see that exemplified in NVIDIA’s integration of Tensor Cores in its Volta GPU architecture last year. But before the market gets up in arms thinking Google is now leading the hardware race, it’s important to put yesterday’s announcement in the right context.

Apple building a VR headset is good news for Facebook, Qualcomm

Though there have been rumblings of its existence for years, a more substantial report on Apple’s development of a virtual reality product was released last week. The story indicates that Apple is targeting a 2020 release for the headset and that it will use in-house developed chip, screen, and wireless technologies. True or not, the anticipation of the Cupertino giant’s entry into the VR market will spark development acceleration by competitors and drive consumer interest in the current state of the technology.

With 22 million VR headsets expected to sell in 2018, increasing to 120 million units by 2022 according to CCS Insight, there is a $10B market up for grabs.

Despite some of the language in the CNet story, what Apple is aiming to do isn’t a revolutionary step ahead of the current offerings and roadmaps that exist from other VR technology providers. Apple has a reputation, however, for waiting for the “right time” to introduce new product lines and will likely put its specific touch of refinement and focus on a segment that is viewed as moving in too many directions.

The source for this Apple information, which appears to be strong, believes that Apple is building a wireless configuration for a combined VR/AR headset. Virtual reality and augmented reality are related, but differ in that AR overlays information on the real world while VR completely blocks out your surroundings.

Unlike currently shipping detached VR headsets like the Oculus Go, Apple appears to be utilizing an external box that will provide the majority of the computation necessary. In theory, this allows the company to provide better visuals, longer battery life, and lower costs than if it had decided to go with a fully integrated solution.

Apple will have hurdles to cross. Wireless data transfer from an external unit to the headset isn’t as easy as it sounds, as the bandwidth and low latency required put specific constraints on the technology it can use. Taking advantage of 60GHz WiGig or millimeter-wave frequencies (the same used for some versions of 5G cellular) means that objects like glass and even the human body can impact performance dramatically. Hiccups in the data stream from the external box to the VR headset will result in nausea and general discomfort.

The Oculus Go from Facebook

One drawback to using an external box for processing is that it limits the portability of the device. The new Oculus Go can travel with the consumer from work, to home, to the plane. A wirelessly “tethered” system is definitely an upgrade over the wired systems that exist on PCs today, but it doesn’t change the usage model. Apple could decide to go both ways – allowing the headset itself to handle lower computational tasks like video playback and basic games, but requiring the external box for more intense applications and productivity.

Leaked specs from the CNet story included the use of 8K displays for EACH EYE of the headset. This would be a tough undertaking for a few reasons. First, 8K is incredibly nascent, only showing up at CES this year as a technology demo. In 2020, these panels will still be prohibitively expensive. There are hints that Apple will be building its own displays in the near future, and this could be the first target rather than the next generation of iPhone.
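To see why 8K per eye over a wireless link is such a stretch, consider the raw numbers. The refresh rate and color depth below are assumptions of mine (Apple has confirmed nothing), with a typical 60GHz 802.11ad link as the comparison point.

```python
# Why "8K per eye" over a wireless link is daunting: rough math with
# assumed parameters (90 Hz refresh, 24-bit color), since no specs exist.
EYES = 2
WIDTH, HEIGHT = 7680, 4320
REFRESH_HZ = 90
BITS_PER_PIXEL = 24

uncompressed = EYES * WIDTH * HEIGHT * REFRESH_HZ * BITS_PER_PIXEL
wigig_budget = 4.6e9      # ~4.6 Gbit/s usable on a typical 802.11ad link

print(f"Uncompressed video: ~{uncompressed / 1e9:.0f} Gbit/s")   # ~143 Gbit/s
print(f"Roughly {uncompressed / wigig_budget:.0f}:1 compression needed "
      "to fit a 60GHz wireless link")
```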

I would expect the VR/AR product to use Apple’s in-house developed silicon. The company has clearly shown that it prefers to develop towards and prioritize specific architectural directions that it has on its roadmap. Knowing that Apple also plans to replace Intel processors in its notebooks in the same time frame, there will be overlap in the chips being used. The source story believes that Apple is waiting for 5nm process technology for this jump, which is at least two process generations ahead. Getting to mass production for chips of that size by 2020 is another hurdle.

Of course, Apple is definitely not the first player in the field. Other major players have been developing virtual reality hardware, algorithms, and software systems for years, creating the foundation for the software ecosystem to evolve. Despite the attention that Apple is getting (and will get as more rumors surface), there is a lot to gain for those early movers.

Oculus and HTC brought the first mainstream VR headsets to the world, but they required powerful PCs and were hardwired to the system. Snap-in designs like the Samsung GearVR and Google Daydream let users double up a smartphone as the brains of a head-mounted unit. Though the headsets themselves are reasonably priced, the phones that power them are often $600 or higher, and they limit battery life and capabilities.

Qualcomm started developing chips and reference designs for standalone VR headsets back in 2016, and they utilize much of the same technology found in modern flagship smartphones. Just this week, Oculus, now owned by Facebook, launched the Oculus Go, the first mainstream, high-quality, sub-$200 VR device that does not require a PC. Early reviews have been very positive, and having used one myself for a few days, I support the idea that this is the way VR should be utilized going forward.

The current players in the VR market will see an uplift in interest thanks to the strong rumor of Apple entering the fold within two years. To some degree, it validates the VR/AR markets. Even though there is a sizeable audience for virtual reality products today, growth has been much slower than expected. Solid information that Apple will be entering the field will force these companies to increase investment, hoping to solidify a leadership position before Apple gets involved. It will also drive consumers to take notice of VR and AR, moving a subset of the audience to buy in early once their interest is piqued.

TSMC and 7nm Will Revitalize Major Chip Players

Though discussion of chip technology often centers on the likes of Intel, AMD, NVIDIA, and Qualcomm, one of the biggest players is contract foundry TSMC. Taiwan Semiconductor Manufacturing Company represents more than 50% of the world’s fabless semiconductor production, building for those same companies listed above, including Intel for select projects like modems.

There is competition in the field in terms of leading-edge technology capability, mostly from Samsung and GlobalFoundries. Samsung has targeted TSMC’s market share as an area for its own growth with dramatic investments in R&D and production capacity. GlobalFoundries is smaller, but spunky, pushing ahead with new tech like EUV and hoping to become a “fast follower” to the Taiwanese giant. Even Intel has talked about opening its fabs to external production, but the impact there has been minimal thus far.

Despite the pressure from other companies, TSMC continues to be the leader in both revenue and, arguably, roadmaps. Last week, during an analyst call, the company announced it had started high-volume production of 7nm FinFET silicon, with 18 different customer products already taped out. A tape-out marks the hand-off of a finalized chip design to the foundry so volume production can begin. TSMC expects to have 50 total 7nm tape-outs by the end of 2018.

7nm is of particular interest to the semiconductor industry as it is expected to see wide adoption across a host of different applications. For years, 16nm FinFET technology has been the stalwart of TSMC’s portfolio and is where the bulk of high-performance chips like GPUs and CPUs have remained, despite the fact that 10nm process technology has existed since late 2016. 10nm has only been utilized by a few key partners, including Qualcomm and Apple, targeting power efficiency more than raw performance. Chips that demanded higher performance (frequency) stuck with the 16nm node.

But with 7nm, that changes. This is where we will find NVIDIA’s next-generation graphics chips for gaming, machine learning, and AI. AMD is going to be building its upcoming graphics family on TSMC 7nm (while its next-gen CPU products stay with GlobalFoundries). TSMC also mentioned other 7nm customers, including cryptocurrency ASIC designers and neural processing engines, and even mobile processors from Qualcomm and Apple will find their way to the node.

Performance claims for TSMC’s 7nm FinFET technology are impressive. The company stated on its analyst call that moving from the current 16nm node to 7nm will result in a 70% die shrink, dramatically reducing the area each chip occupies on a wafer. As cost is based on a per-wafer model, this is a big advantage for vendors like Qualcomm that are building small chips, and it allows someone like NVIDIA to design a more powerful GPU in the same area. 7nm will also provide either a 60% power consumption drop at the same frequency or a 30% improvement in frequency at the same power level, letting engineers decide between these traits on a per-design basis.
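To give those percentages some rough context, here is a simple area-only sketch. The 300 mm² starting die size and the naive dies-per-wafer math are illustrative assumptions (real calculations account for edge loss, scribe lines, and yield); only the 70%/60%/30% figures come from TSMC’s stated claims.

```python
# Illustrative arithmetic around TSMC's public 7nm claims.
# Die size and the area-only model are assumptions, not TSMC data.

WAFER_DIAMETER_MM = 300
WAFER_AREA_MM2 = 3.14159 * (WAFER_DIAMETER_MM / 2) ** 2

def dies_per_wafer(die_area_mm2):
    """Naive estimate: total wafer area divided by die area."""
    return int(WAFER_AREA_MM2 / die_area_mm2)

die_16nm = 300.0                   # hypothetical 300 mm^2 chip on 16nm
die_7nm = die_16nm * (1 - 0.70)    # claimed ~70% die shrink -> 90 mm^2

print(dies_per_wafer(die_16nm))    # ~235 candidate dies per wafer
print(dies_per_wafer(die_7nm))     # ~785 candidate dies per wafer

# The same claims frame the power/frequency trade-off:
power_at_same_clock = 1.0 * (1 - 0.60)   # 0.4x the power at iso-frequency
clock_at_same_power = 1.0 * (1 + 0.30)   # 1.3x the frequency at iso-power
print(power_at_same_clock, clock_at_same_power)
```

In a per-wafer cost model, tripling the number of candidate dies per wafer is what makes the node attractive even before the power and frequency gains are counted.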

In short, this means longer battery life, more processor capability, and finally some observable performance improvements coming to consumer products in 2019.

For TSMC itself, the technology lead it appears to have with 7nm FinFET could not come at a better time. With the market for new smartphone chips softening according to many reports, demand for the 10nm process will follow the same trend. With 7nm, TSMC will be able to balance customer demand across mobile vendors, graphics vendors, AI vendors, and others. As silicon demand shifts, TSMC will potentially have the answer for all of them.

The first generation of TSMC’s 7nm process technology uses existing manufacturing hardware, though it does require additional lithography steps known as multi-patterning. So, while this lowers the barrier to entry and the capex requirements for TSMC, it means each wafer spends more physical time in the machines, increasing demand on the equipment and decreasing throughput (absent further capacity investment). This translates into a cost increase, though TSMC and its partners haven’t talked specifics. For flagship products like GPUs and cryptocurrency miners this won’t be a problem, as the cost can be absorbed or prices can be increased to compensate. For less expensive processors like budget-market cell phone chips, it could be a concern.

The net result of this 2018-2019 ramp of 7nm process technology is that hardware is about to get interesting once again. The stagnant areas of graphics and high-performance mobile devices will likely see big changes in 2019 as TSMC’s (and Samsung’s and GlobalFoundries’) new 7nm tech ramps. It also means that nascent areas like AI processing engines, machine learning chips, and even crypto/blockchain processors will have room to grow and expand in capability in a way we have yet to witness.

 

New AMD Ryzen Chips Put Pressure on Intel, Again

Today marks an important day for AMD. With the launch of the Ryzen 2000-series of processors for consumer DIY enthusiasts, gamers, and OEM partners, AMD is showing not only that it got back into the race with Intel, but also that it is confident enough in its capability and roadmap to commit to an annual cadence of releases.

The Ryzen 2000-series is not the revolutionary step forward that we saw with the first release of Ryzen. Before last year, AMD was seemingly miles behind the technology Intel provided to the gaming market, and the sales results showcased that. Not since the release of Athlon had AMD proved it could be competitive with the blue-chip giant that built dominating technology under the Core family of brands.

While the first release of Ryzen saw insane IPC improvements (instructions per clock, one of the key measures of peak CPU performance) of roughly 50% over the previous architectural design, Ryzen 2000 offers a more modest 3-4% uplift in IPC. That’s obviously not going to light the world on fire, but it is comparable to the generation-to-generation jumps we have seen from Intel over the last several years.

AMD does have us looking forward to the “Zen 2” designs that will ship (presumably) in this period next year. With them comes a much more heavily revised architecture that could close the remaining gaps with Intel’s consumer CPU division.

The Ryzen 2000-series of parts does have some interesting changes that stand out from the first release. These chips are built on a more advanced 12nm process technology from GlobalFoundries, down from the 14nm tech used on the 1000-series. This allows the processors to hit higher frequencies (+300 MHz) without drastic jumps in power consumption. Fabs like GF are proving that they can keep up with Intel in manufacturing, and that gives AMD more capability than we might have previously predicted.
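Combining that modest IPC gain with the higher clocks gives a rough sense of the overall generational uplift. A minimal sketch, assuming a hypothetical 4.0 GHz baseline boost clock:

```python
# Back-of-the-envelope model: CPU throughput scales with IPC x clock.
# The 3-4% IPC uplift and +300 MHz come from the article; the 4.0 GHz
# baseline boost clock is a hypothetical value for illustration.

base_clock_ghz = 4.0
new_clock_ghz = base_clock_ghz + 0.3          # +300 MHz from the 12nm node
ipc_uplift = 1.035                            # midpoint of the 3-4% claim

combined_uplift = (new_clock_ghz / base_clock_ghz) * ipc_uplift
print(f"~{(combined_uplift - 1) * 100:.0f}% single-thread gain")  # ~11%
```

A roughly 10% single-thread gain in one year is modest but respectable, and it tracks with what we have come to expect from annual CPU refreshes.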

AMD tweaked the memory and cache systems considerably in this chip revision, claiming cache latency reductions of 13-34% depending on the level. Even primary DRAM latency drops by 11% based on the company’s measurements. Latency was a sticking point for AMD’s first Ryzen release, as its unique architecture meant that one segment of cores could only talk to the other over an internal interconnect called Infinity Fabric. This slowed data transfer and communication between those cores and impacted specific workloads, like lower-resolution gaming. Improvements in cache latency should alleviate this to some degree.

The company took lessons learned from the first generation’s Precision Boost feature and improved it with the 2000-series. Meant to give additional clock speed to cores when a workload uses only a subset of available resources, the first iteration used a very rigid design, improving performance in only a few scenarios. The new implementation creates a gradual curve of clock speed versus core utilization, meaning more applications that don’t fully load the CPU will be able to run at higher clocks based on the available thermal and electrical headroom of the chip.
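A conceptual sketch of the difference is below. This is not AMD’s actual boost algorithm; the clock values and the linear taper are hypothetical, chosen only to contrast a rigid step with a gradual curve.

```python
# Conceptual model only, not AMD's real boost logic. Values are hypothetical.

BASE_GHZ, MAX_BOOST_GHZ, TOTAL_CORES = 3.7, 4.3, 8

def boost_gen1(active_cores):
    """First generation: full boost on 1-2 cores, a flat lower clock otherwise."""
    return MAX_BOOST_GHZ if active_cores <= 2 else BASE_GHZ

def boost_gen2(active_cores):
    """2000-series style: clock tapers gradually as more cores load up.
    (Real silicon also factors in temperature and current limits.)"""
    step = (MAX_BOOST_GHZ - BASE_GHZ) / (TOTAL_CORES - 1)
    return MAX_BOOST_GHZ - step * (active_cores - 1)

for cores in range(1, TOTAL_CORES + 1):
    print(cores, round(boost_gen1(cores), 2), round(boost_gen2(cores), 2))
```

The practical upshot is that a game loading four of eight cores no longer falls all the way back to the base clock; it lands somewhere on the curve instead.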

There are other changes with this launch that give it an edge over the previous release. AMD is including a high-quality CPU air cooler in the box with all of its retail processors, something Intel hasn’t done for its enthusiast parts in a few generations. This saves consumers money and lessens the chance of getting all the new hardware home without a compatible cooler. StoreMI is a unique storage solution that uses tier-caching to combine the performance of an SSD with the capacity of a hard drive, essentially getting the best of both worlds. It supports a much larger caching SSD than Intel’s consumer offerings and claims to be high-performance and low-effort to set up and operate.
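For readers curious what tier-caching means in practice, here is a heavily simplified conceptual sketch. It is not StoreMI’s actual logic, just an illustration of promoting frequently accessed blocks to a fast tier while cold data stays on the large, slow tier.

```python
# Conceptual tier-caching illustration (hypothetical, not StoreMI's code):
# hot blocks get promoted to the SSD tier, cold blocks stay on the HDD.

from collections import Counter

class TieredStore:
    def __init__(self, fast_capacity_blocks):
        self.fast_capacity = fast_capacity_blocks
        self.fast_tier = set()          # block IDs resident on the SSD
        self.access_counts = Counter()  # hotness tracking per block

    def read(self, block_id):
        self.access_counts[block_id] += 1
        tier = "SSD" if block_id in self.fast_tier else "HDD"
        self._maybe_promote(block_id)
        return tier

    def _maybe_promote(self, block_id):
        if block_id in self.fast_tier:
            return
        if len(self.fast_tier) < self.fast_capacity:
            self.fast_tier.add(block_id)
        else:
            # Evict the coldest resident block if the new one is hotter.
            coldest = min(self.fast_tier, key=lambda b: self.access_counts[b])
            if self.access_counts[block_id] > self.access_counts[coldest]:
                self.fast_tier.remove(coldest)
                self.fast_tier.add(block_id)

store = TieredStore(fast_capacity_blocks=2)
for block in [1, 2, 3, 1, 1, 3, 3, 3]:
    print(block, store.read(block))   # repeated blocks end up served from "SSD"
```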

AMD saw significant success with the Ryzen processor launch last year and was able to grab a sizeable jump in global market share because of it. At some retailers and online sales outlets in 2017, AMD had as much as 50% share among PC builders, gamers, and enthusiasts. AMD will need many consecutive iterations of successful product launches to put a long-term dent in Intel’s lead in the space, but the Ryzen 2000-series shows that AMD is capable of keeping pace.