Not long after Apple introduced the Newton, its first personal digital assistant (PDA), in 1993, it became clear that the product’s life would be a short one. While the concept of the Newton drew real attention, its design and functionality fell short and never worked as Apple promised. Its most significant problem was its deeply flawed handwriting recognition. It failed for many reasons, the key one being that the mobile processors available at the time were incapable of handling the task with any accuracy or precision, and the software Apple built on top of them was poorly executed.
The latest reports for the graphics industry were released this week and paint an interesting picture of the market. Not only did annual GPU shipments increase by nearly 10%, but the big winner looks to be AMD, with a noticeable swing in market share away from NVIDIA. This comes at a controversial time in the discrete graphics market, with cryptocurrency mining affecting both the availability and pricing of hardware, but the numbers tell of a market in flux.
As PC gamers and cryptocurrency miners continue to battle for the same pool of graphics hardware, both NVIDIA and AMD benefit from the competition in the marketplace. Sales are clearly higher than they would be if PC gamers were the only target customer, and because of that all the add-in card partners have no problem selling whatever the GPU vendors can supply them. But how did that play out in terms of market share shift in 2017?
Based on reports from Jon Peddie Research, some major swings have occurred. Looking at quarter-to-quarter movement from Q3 to Q4 2017, AMD’s share of add-in card shipments increased from 27.2% to 33.7%. That’s a jump of 6.5 percentage points in a single quarter; an impressive change. As there are only two competitors in the discrete space, NVIDIA saw a matching 6.5-point drop over the same span, falling from 72.8% to 66.3% share.
Annually, comparing 2017 to 2016, AMD’s market share increased from 29.5% to 33.7%, a change of 4.2 percentage points. NVIDIA saw the inverse, dropping 4.2 points.
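The swing math here is simple percentage-point arithmetic, which a short sketch makes explicit (the figures are those quoted from JPR above):

```python
# A quick sanity check of the JPR share math quoted above; "pp" means
# percentage points (absolute change in share, not relative growth).
def share_swing(before: float, after: float) -> float:
    """Change in market share, in percentage points."""
    return round(after - before, 1)

print(share_swing(27.2, 33.7))  # AMD, Q3 -> Q4 2017: +6.5 pp
print(share_swing(29.5, 33.7))  # AMD, 2016 -> 2017: +4.2 pp
# In a two-player discrete market, NVIDIA's swing is the mirror image:
print(share_swing(72.8, 66.3))  # -6.5 pp
```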
Clearly, NVIDIA is still the leader in graphics card sales globally. Though the green giant dropped from 72% to 66%, it maintains a 33-point advantage over the Radeon brand. NVIDIA’s GeForce products continue to provide better power efficiency and arguably better adjacent technologies (drivers, software tools, display tech, etc.) and I believe that gamers prefer it by default (justified or not).
It is also worth noting that NVIDIA tends to have better margins and higher ASPs on average than AMD in the consumer graphics space. So while unit share might have tilted in AMD’s favor by 6.5 points this past quarter, it is reasonable to assume that revenue share has moved less than that.
JPR showed a seasonal Q3-to-Q4 drop in total graphics shipments of 4.6%. This is close to the expected result from more than a decade of market tracking, though 2016 was an outlier with a Q4 increase. A return to seasonal trends may indicate the GPU market is becoming more stable, and the battle between gamers and miners might be waning. However, it could also be a result of gamers backing away from the discrete GPU space to wait for the rumored launches of updated NVIDIA products this spring.
Another interesting note from the JPR data is that the graphics market saw a 9.7% year-on-year increase in shipments, likely indicative of the impact mining-specific sales have had overall.
Peddie claims that add-in card vendors sold more than 3 million units directly to cryptocurrency miners, worth more than $776M in revenue. That averages out to more than $258 per card, much higher than typical GPU prices over the last decade. A rising ASP (average selling price) is great for both AMD and NVIDIA, as it lifts margins, as is evident from both companies’ recent earnings releases.
With more than 52 million add-in cards sold in total for 2017, cryptocurrency-specific sales represent about 6% of the total market. This is slightly lower than my expectations but is still a significant driver.
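The per-card and share figures follow directly from Peddie’s totals, as a back-of-the-envelope check confirms:

```python
# Back-of-the-envelope check on the Peddie mining figures quoted above.
mining_revenue = 776_000_000  # dollars, direct sales to miners
mining_units = 3_000_000      # cards sold directly to miners
total_units = 52_000_000      # total add-in cards shipped in 2017

asp = mining_revenue / mining_units        # average selling price per card
mining_share = mining_units / total_units  # miners' slice of the market

print(f"ASP: ${asp:,.2f}")           # ASP: $258.67 -- "more than $258"
print(f"Share: {mining_share:.1%}")  # Share: 5.8% -- roughly 6%
```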
It would appear that AMD is indeed the primary beneficiary of the mining influx, based on the market share gains in this most recent quarter. It is unlikely that a significant number of gamers decided to buy AMD graphics hardware (when it was even available) over NVIDIA in that window, so it seems a safe assumption that the majority of the increase is courtesy of cryptocurrency sales.
This shift in the market has also changed the distribution of graphics card sales, moving units from lower-priced to higher-priced segments. JPR defines a “mainstream” graphics card as one sold for less than $150, a “midrange” card as one sold for $150-$249, and a “high-end” card as anything over $250. Quarter to quarter, the high-end segment moved from 11.5% to 16.0% of total shipments and the midrange jumped from 41.5% to 51.7%. As a result, the mainstream segment dropped from 39.2% to 26.1%.
Again, this moves ASPs to areas that benefit both AMD and NVIDIA, improving margins.
The jump AMD has made in these reports should not be dismissed; it paints a very positive picture for the company in the immediate term. On short-term profitability, AMD showed much better than expected results in Q4 last year, and I expect at least some of that value to appear in the Q1 2018 results.
What happens next in the world of graphics is going to be quite an interesting story. There is a ton of uncertainty as we move into the rumored release of another generation of NVIDIA GeForce graphics cards sometime this spring. (AMD doesn’t have anything on the docket for new consumer graphics products for the foreseeable future.) How will NVIDIA modify pricing with this new product family? Keeping in mind that the higher-than-MSRP prices that graphics cards sell at do not directly benefit the GPU provider (instead the add-in card partners or channel sales take that money today), NVIDIA couldn’t be blamed for wanting a part of that additional revenue for itself.
Another theory circulating is that NVIDIA might attempt to lock out cryptocurrency mining on a subset of graphics cards in order to alleviate PC gamers’ angst. While this may be technically possible, it’s difficult to thwart a community built on profitability. It would represent a tremendous goodwill gesture from NVIDIA, but it doesn’t make much fiscal sense for the company in the short term.
Putting that aside, if the next generation of GeForce products improves performance and power efficiency for cryptocurrency mining, and NVIDIA has done a decent job of building inventory, there is a reasonable chance the market share swing we saw in Q4 2017 could reverse.
The cryptocurrency space remains volatile and highly unpredictable, and if the market shifts and we return to a world where gaming performance, features, and efficiency are restored as the most important reasons for graphics card purchases, NVIDIA should again have the edge. The gains AMD has seen in the latest JPR information look to be dependent on coin-mining, and without that to lean on, the Radeon products could return to a battle they are not as well equipped for.
Even if blockchain technology is here for good, and it appears that is the case, both companies are hesitant to publicly invest in the field. GPUs have been the dominant compute platform for cryptocurrency since its inception, but the encroachment of ASICs and changing algorithms could lessen the value to both NVIDIA and AMD seemingly at any point.
There are two interesting narratives surrounding Apple and China. One is around the launch of the iPhone X and the “supercycle” analysts hoped for, which was going to require the Chinese market to participate. The other is around Apple having to transition to localized Chinese servers for iCloud.
Why Apple dominates smartphone revenues but may need some lower priced models to keep growing their services business.
Not long after Steve Jobs came back to Apple in 1997, I had a couple of talks with him that ranged from how he was going to save Apple to the guiding principles he would apply to bringing the company back from the brink of bankruptcy. At the time he rejoined Apple, I had been spending time with then-Apple CEO Gil Amelio, looking at ways he could keep Apple afloat and pull it out of its downward spiral.
In my first meeting with Jobs, which took place on his second day in the role of CEO, or interim CEO as he liked to say, he told me that one of the guiding principles for making Apple relevant again was a focus on industrial design. At the time I questioned this focus, but as history has shown, industrial design has indeed played a key role in bringing Apple back from the brink of disaster.
A few weeks later, when I bumped into him on the Apple campus, I followed up on that first meeting and asked his view on margins. I had had this discussion with him before he left Apple in 1985, when his goal was to price any product Apple brought to market at a margin of 22% or above. Interestingly, that seemed low at the time, as these were the early days of the PC and industry margins were closer to 35%-40%. If I remember correctly, the Mac’s margins were in that same range then too.
By the time Jobs returned to Apple in 1997, PC margins had shrunk to under 20%, and today, in some cases, those margins are closer to 5%. Apple’s margins on Macs, smartphones, and most hardware products have consistently remained in the 35-37% range.
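For reference, the relationship between a margin target and a selling price is straightforward arithmetic. This sketch uses the standard gross-margin definition with hypothetical round-number unit costs, not Apple’s actual cost structure:

```python
# A sketch of how a gross-margin floor translates into pricing, using the
# standard definition: margin = (price - cost) / price. The unit costs
# below are hypothetical round numbers, not Apple's actual costs.
def price_for_margin(unit_cost: float, target_margin: float) -> float:
    """Minimum selling price that achieves the target gross margin."""
    if not 0 <= target_margin < 1:
        raise ValueError("target_margin must be in [0, 1)")
    return unit_cost / (1 - target_margin)

# A 22% margin floor on a product costing $500 to build:
print(round(price_for_margin(500, 0.22), 2))  # 641.03
# At the ~37% margins discussed above, the same product sells for:
print(round(price_for_margin(500, 0.37), 2))  # 793.65
```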
I attribute this to two main factors:
1. Jobs’ initial goal of margins of at least 22%, which became ingrained in the leadership principles that guide Apple’s current management team. To their credit, Apple’s CEO and his team likely treat that 22% as the lowest margin they would ever accept on any product they ship.
2. Apple has always aimed to be the premium provider in any category it enters. This is not news to anyone who follows Apple, since it has been in the company’s DNA since the Mac was introduced in 1984. Premium pricing comes along with that premium focus, and Apple makes no apologies for creating and delivering best-of-breed products in everything it ships. The new HomePod is an excellent example of this strategy.
Unlike other home speakers, whose focus is on providing an assistant in a low-end speaker design, Apple stayed true to that DNA with a product in which the quality of the speaker was central to its premium design thinking. Yes, Apple has a lot of work to do to get Siri up to par with some competing voice assistants, but the AI engine is getting smarter, and with the dedicated specialty microphones tied to Siri, the HomePod will get better over time.
The chart below emphasizes how Apple’s focus on premium products and commitment to healthy margins impact its position in smartphone revenue. Apple had the lion’s share of smartphone revenue in the last reported quarter. Samsung was #2 with 15.7% of all smartphone revenue in the same quarter. Even the “other” category, which represents hundreds of millions of smartphones but at much lower ASPs, accounted for only 26.3%.
Apple’s total revenue growth is also impressive. Apple’s Q1 (Sept to Dec 2017) broke records again in almost all categories. This reflects Apple’s focus on premium products with higher margins.
While I don’t believe Apple will deviate much from the premium-product, high-margin strategy that has served it well for decades, I have lately been wondering if we could see a bit of a shift in pricing over the next few years. Look at the services category in the chart above: this segment of the business is growing rapidly. Services brought in $8.5 billion in revenue last quarter and continues to grow.
However, it is growing because it is tied to hundreds of millions of Apple devices, including iPhones, iPads, iPods, and Macs of all flavors. For services to continue to grow, Apple needs to keep selling hundreds of millions of new devices year after year.
Premium pricing serves Apple well today, but I believe there will come a time when Apple will need to rethink pricing for future smartphones, and perhaps even iPads and Macs, offering models with lower ASPs and thus lower margins if it wants to keep its services business growing. If and when that will happen is anybody’s guess, but Apple and Wall Street understand that services are a critical part of Apple’s future growth, and for that business to keep growing it will need to be connected to more and more devices in the future.
Premium products and premium pricing can always drive the lion’s share of profits, but I believe Apple may have to create a range of products with lower ASPs and slightly lower margins if it wants its services business to keep expanding and remain a big contributor to the bottom line.
It’s certainly not what I expected. After all, when it comes to a show that’s traditionally focused on the telecom industry, audio typically just refers to voice.
But at this year’s Mobile World Congress (MWC) trade show in beautiful Barcelona, Spain, several of the biggest product announcements actually share a similar characteristic: a focus on sound and audio. In this case, it’s the surround sound technology from Dolby called Atmos. Both Samsung’s new flagship S9/S9+ phones and the Huawei Matebook X Pro notebook feature it, as does the new Lenovo Yoga 730 notebook, also announced here at MWC. The convertible 2-in-1 Yoga 730 extends its audio features further through the integration of both the Alexa and Cortana digital assistants, as well as array microphones that let you use the device from a distance, similar to a standalone smart speaker.
To be clear, all of these devices offer a variety of other important new technologies that go well beyond audio, but these sound-focused capabilities are interesting for several reasons. First, Dolby’s Atmos is a genuinely impressive technology that people will notice as a unique feature of these products. But in addition, the inclusion of Atmos is the kind of subtler improvement that is becoming the primary differentiator for new generations of products in mature categories such as smartphones and PCs.
Walking through the halls of the convention center you could easily find collections of very nice-looking client devices amidst the telecom network equipment, IoT solutions, autonomous car technologies, and other elements that are a big part of MWC. What was impossible not to notice, however, is that they pretty much all looked the same, particularly smartphones, which have morphed into little more than flat slabs of glass. Even many of today’s superslim notebook PCs look fairly similar. In both cases the form factors are quite good, so this isn’t necessarily a bad development, but the devices are getting harder and harder to tell apart at a glance.
As a result, companies are having to incorporate interesting new technologies within their devices to offer some level of differentiation from their competition. That’s why the Dolby Atmos integration is an interesting reflection on the state of modern product development.
At its heart, Atmos is the next evolution of Dolby’s 30+ year history of creating surround sound formats and experiences. Originally developed for movie theaters and now more commonly found on home audio products like soundbars and AV receivers from big consumer electronics vendors like Sony, Atmos offers two important enhancements over previous versions of Dolby’s surround sound technologies. First, from an audio perspective, Atmos highlights the ability to position sounds both vertically and horizontally, delivering an impressive 360˚ field of sound that really makes you feel like you’re in the center of whatever video (or gaming) content you happen to be watching. Previous iterations of Dolby (and rival DTS) technology have had some of these capabilities, but Atmos takes them to a new level in terms of performance and impact. The technology primarily achieves this through head-related transfer functions (HRTFs), which emulate how sounds enter our ears at slightly different times and are influenced by the environment around us.
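To make the HRTF idea concrete, one of the cues such functions encode is the interaural time difference between our two ears. The sketch below uses Woodworth’s classic spherical-head approximation with textbook constants; it illustrates the general principle only and is not Dolby’s actual implementation:

```python
import math

# Woodworth's spherical-head approximation of interaural time difference
# (ITD): the extra time sound from a given azimuth needs to reach the far
# ear. A classic textbook model, not Dolby's actual HRTF implementation.
HEAD_RADIUS_M = 0.0875  # typical adult head radius, meters
SPEED_OF_SOUND = 343.0  # meters/second in room-temperature air

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source directly to one side arrives ~0.66 ms later at the far ear --
# one of the timing cues an HRTF-based renderer reproduces in headphones.
print(f"{itd_seconds(90) * 1000:.2f} ms")
```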
The second big change for Atmos involves the audio file format. Unlike previous surround sound technologies, where the position of sounds was fixed, in Atmos sounds are described as objects and will sound slightly different depending on what types of speakers and audio system they’re being played through. Essentially, it optimizes the surround sound experience for specific devices and the components they have.
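The object-based idea can be sketched as a simple data model. Everything below is hypothetical and purely for illustration; the real Atmos bitstream format is considerably more involved:

```python
from dataclasses import dataclass

# An illustrative data model of the channel-based vs. object-based split.
# All names and fields here are hypothetical, not the real Atmos format.

@dataclass
class ChannelSample:
    channel: str  # fixed destination, e.g. "front-left"
    pcm: bytes    # audio routed to exactly that speaker, always

@dataclass
class AudioObject:
    pcm: bytes
    x: float  # left-right position in the room (-1.0 .. 1.0)
    y: float  # front-back
    z: float  # height -- the vertical axis Atmos adds
    # The playback device's renderer decides at play time which of its
    # real speakers reproduce this object, and at what levels, so the
    # same content adapts to a soundbar, a notebook, or headphones.

helicopter = AudioObject(pcm=b"...", x=0.2, y=-0.5, z=0.9)
print(helicopter.z)  # rendered "overhead" however the device best can
```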
The effectiveness of this approach is very clear on Huawei’s Matebook X Pro, where the Dolby Atmos implementation leverages the four speakers built into the notebook. One critique of surround sound technologies is that their effectiveness can be dramatically impacted by where you sit; you really need to be in the sound “sweet spot” to get the full impact of the effect. What’s interesting about implementing Atmos in a notebook is that it’s almost impossible not to be in the sweet spot if you’re viewing content directly on the screen in front of you. As a result, the audio experience with Dolby Atmos-enabled content on the Matebook X Pro is extremely impressive; it’s actually the second-generation implementation for Huawei and Dolby, and it’s quite effective.
For mobile devices like the new Samsung S9/S9+, Dolby Atmos can be delivered either through stereo speakers (a feature that many smartphones still don’t have) or, even more effectively, through a headphone jack. In fact, the implementation of Atmos on the S9 is probably the most effective argument in favor of having a headphone jack on a smartphone that I’ve seen. With headphones, you get a truly immersive surround sound experience with Atmos-enabled content on the S9/S9+, and through the speakers on either end of the device you also get an audio experience that’s much better than on most other smartphones.
In the case of the Lenovo Yoga 730, the Atmos implementation is only via the headphone jack, but once connected to a standard set of headphones, it gives you the same kind of virtual surround experience of the other devices.
Admittedly, not everyone cares about high-quality audio as much as I do, but given how much video content we consume on our devices, whether through streaming services like Netflix or just as part of our social media or news feeds, I believe it can be an important differentiator for vendors who deploy it. Plus, it’s important to set our expectations for the kinds of advancements the next few generations of our devices are likely to have. They may not be as dramatic as folding screens, but technologies like Dolby Atmos can certainly improve the overall experience of using our devices.
Last week, I wrote about how the whole of the consumer tech industry has reached maturity. The point of that post was to articulate, at a high level, how the maturity of the industry changes the nature of the way companies will compete. Today, I’d like to talk about how that observation will impact smartphones.
On Sunday, Feb 25, on the eve of Mobile World Congress in Barcelona, Spain, Samsung launched its latest flagship smartphones, the Galaxy S9 (S9) and the Galaxy S9+ (S9+). The products build on the successful formula of design and technology we have become accustomed to with the Galaxy S line. The Infinity Display introduced with the Galaxy S8 gets even more immersive this year, as Samsung shaves down the lip and the chin on the S9 and S9+. The experience is especially immersive when watching video content and taking advantage of the new Dolby Atmos sound. The camera gains better low-light performance and super slow-mo video, both features high on consumers’ wish lists. At the event, Samsung showed clever ways to use the slow-mo videos you create, from setting one as a lock screen to watching it in reverse or turning it into a GIF.
If you are buying the S9+, you are also getting a dual camera for the first time in a Galaxy S model. Lastly, there are AR Emoji, based on a machine-learning algorithm that analyzes a 2D image of the user and maps more than 100 facial features to create a 3D model that reflects and imitates the user’s expressions, delivering a very personalized experience that can be shared on third-party messaging platforms. If you are not keen on using your own likeness, you can always pretend to be Mickey Mouse or one of The Incredibles.
It was also interesting to hear Samsung talk about AR beyond emojis. Its early move into VR had some industry watchers thinking AR was not going to be an area where we would see much involvement from Samsung. Clearly, that is not the case, as we have seen Bixby Vision add capabilities in the area, from translating signs to estimating calories. At the launch, through the Unpacked app for iOS and Android, the audience was able to visualize the new Galaxy S9 by pointing our phone cameras at the event pass we were given.
With the S9 and S9+, Samsung addressed some of the complaints Galaxy S8 and S8+ users had, in particular with the position of the fingerprint reader, which is now below the camera rather than beside it as it was last year. Another improvement comes to face recognition with the addition of Intelligent Scan, which combines iris scanning and facial recognition for a more accurate and convenient way to unlock your phone. Samsung did not make any claims around security for Intelligent Scan but highlighted that the new solution improves the experience in low light, for instance.
Many of the new features the Galaxy S9 sports show off software, semiconductor, and hardware expertise. I still feel that Samsung does not talk about software in a very confident way unless it is around Knox. While historically software has not been Samsung’s strongest suit, I feel this has changed since DJ Koh took on the leadership of the Mobile Division. Going forward, with more of the value-add shifting from hardware to software, Samsung must learn to talk confidently about software even when, as perhaps with Bixby at this launch, the progress is not as strong as the company would have wanted.
Reimagining an Already Successful Product is Hard
There is no question that owners of older Galaxy S models will find this upgrade appealing, especially with the dual camera system on the S9+. The more difficult audience to convince is current Galaxy S8 and S8+ owners who are not in the “I always want the latest model” or “I am on an annual replacement plan” groups. Samsung is certainly not alone in trying to grow the number of users who upgrade every year.
The problem the industry faces, however, is that with enhancements that are less about hardware and more about an improved experience, it gets harder to convince users, because it takes more time and effort to show the value being delivered. This is especially true as the price tags of these devices grow every year. Trade-in incentives like the one Samsung is offering with pre-orders of the S9 and S9+ certainly help. In the future, I wonder if brands targeting the high end of the market will have to get into financing plans more aggressively than they have thus far.
This year, Samsung also has the added challenge of attracting potential buyers away from Apple. That challenge has nothing to do with the S9 and S9+ not being great smartphones; it is simply that much of what they offer was already in their predecessors, and they are missing the wow effect Apple got with the iPhone X, where it caught up with some of Samsung’s features: wireless charging, an OLED screen, and an edge-to-edge display. This is why answering the most frequently asked question, “Can the Galaxy S9 compete with the iPhone X?”, is not straightforward, nor would it be accurate to call the S9 the response to the iPhone X. From a technology perspective, the S9 and S9+ have very little to envy in the iPhone X. Yet from a portfolio perspective, they feel more like an evolution of the previous products, the same way the iPhone 8 and 8 Plus did with the iPhone 7 and 7 Plus.
Samsung’s Portfolio Conundrum
Considering what I just said, it is easy to see how the bar for Samsung’s next Galaxy S will be much higher. If we assume the next Galaxy S will be called the Galaxy S10 and will launch a year from now, I would expect Samsung to bring a considerable change in design to mark such a milestone and remain ahead of the competition.
The big question is how Samsung will continue to juggle the two product families of the Galaxy S and Galaxy Note. The audiences differ slightly in how they prioritize what they want on their device. Both want great design and technology, but while Galaxy S users put design first, Galaxy Note users put tech first.
With the Galaxy Note 8, we started to see very strong similarities in the design language which makes me wonder how long these separate families can be justified. Even from a technology perspective, we see a fast adoption cycle of a technology launching in the Note in late summer and then coming to the S in the following spring. Right now the only difference between the Galaxy S9+ and the Note 8 is really just the S Pen.
Will Samsung be able to drive design innovation on the Galaxy S10 while at the same time delivering differentiated technological innovation with the Note 9? Here, I am not questioning Samsung’s capability to deliver both; I have confidence it can. My question is a portfolio question: how will Samsung do that in a way that does justice to the Galaxy S10 but keeps the Note 9 technologically ahead? Apple does not have this issue, as everything is called an iPhone, which simply means prioritizing models versus families of products.
As for now, there is enough in the new Galaxy S9 and S9+ to get consumers to buy, especially if Samsung is clever in creating excitement around the new features both online and in stores.
This is going to be a controversial column. But these are not normal times. Two terrible things happened last week: The Mueller investigation found that the Russians interfered in the 2016 election. And 17 people were tragically killed in a school shooting in Parkland, Florida. Although these two events are clearly not related, one common thread is that the three prevalent global communications, advertising, and social networks — Google, Facebook, and Twitter — played a role.
Note that Robert Mueller, in his indictment, described the Russian actions as “information warfare”. In the case of the Florida shootings, Nikolas Cruz had placed a variety of gun- and violence-related posts on social media. Now I am not saying that these companies are to blame for what happened, or bear any direct responsibility. But this is a moment in time to recognize that Google, Facebook, and Twitter, while enabling many wonderful things in our daily lives, can also be tools or vehicles for some very bad things. I single these companies out because of their outsized global role and influence: Google is the dominant search engine; Google and Facebook own a huge percentage of the market for digital advertising; and Facebook and Twitter between them are prevalent sources of digital communications, and news/content distribution and consumption.
I believe it is time to start having a serious conversation about the role these companies should play in our national interests. If cyberattacks represent among the greatest dangers to the international community today, one could argue that companies such as Google, Facebook, and Twitter could be the digital/information equivalent of giant defense contractors such as Raytheon and Northrop Grumman. Now this might get me into hot water on privacy and a host of other issues, but I believe that the Department of Defense and key intelligence agencies should be working a lot more closely with these companies than they probably already are. I would also argue that Google, Facebook, and Twitter, as U.S.-based companies, have some obligation here as well. One could see this as the 21st century equivalent of the Manhattan Project.
In the wake of the Russia investigation, we should be demanding that Facebook take steps to prevent similar meddling in the mid-term elections and the 2020 Presidential election. Of course, this might go against their politics, their ethos, and might not even be good for their bottom line. But their collective resources, the information they possess, and their growing capabilities in AI and big data are becoming as important to our national security as any military hardware. In the wake of the Mueller investigation, and other sordid examples of cyberwarfare, shouldn’t Messrs. Pichai, Zuckerberg, and Dorsey be raising their hands and asking, “How can we help?”
Among the many issues a conversation like this raises are how this would be operationalized. After all, these are global companies, and they do huge business in some of the countries that are not exactly our friends. But there should be some arm, or division, within these firms that provides critical services to our national intelligence. It’s likely that cyber-intelligence will be as critical in preventing, say, an attack on our electrical grid or our banking system as any satellite, drone, or other physical piece of military technology.
If it were known that there was a stronger relationship between our government and this tripartite, then perhaps our enemies would think twice about using them as platforms for bad behavior. Plus, the public might feel reassured that our defense agencies are more ‘on it’ on the cyber front.
Now, I think there are similar issues and obligations with respect to incidents such as the Florida school shooting, or other recent mass casualty events, such as the shootings in Orlando and Las Vegas. These are incidents of domestic terrorism. In some cases, ISIS or other international bad actors might be involved (or certainly an influence). A critical question is whether a company such as Facebook has an obligation to more systematically alert the authorities when someone such as Nikolas Cruz posts what he posted. What are Google’s obligations if someone is doing a search on “how to make a bomb”? This clearly gets us into murky territory on issues such as privacy. But we should recognize that our ability to use a platform such as Facebook or Twitter represents a sort of social contract. We know every day that our searches are being tracked, by virtue of the ads that we see. So, for example, if we’re talking about better background checks, doesn’t it make sense to think about how someone’s actions on Google/Facebook/Twitter might figure in here? The rapid development of AI and analytics tools should be helpful in alerting us to whether someone’s application for a gun might be for illicit purposes. Still further, perhaps these tools can be used to enable us to get help to these individuals before it’s too late.
We are in a unique moment in time. The same digital/cyber tools that make our lives better, more convenient, and entertaining are also enablers of some of our society’s darkest forces and attacks on our personal and national security. So I believe it is time to be having a deeper, and likely difficult and uncomfortable conversation, about the role the Internet giants should have in working more closely with those agencies that we pay and expect to protect and defend us, on a local and national level.
Google Launches the Android Enterprise Recommended Program
Android Enterprise Recommended establishes best practices and common requirements for devices and services, backed by a rigorous testing process conducted by Google. The program addresses the top concerns we’ve heard from customers: a need for frequent security updates, reliable and consistent software experiences, and simplified device selection. Many of the world’s top manufacturers are already involved in the program, including Blackberry, Huawei, LG, Motorola, Nokia, and Sony. Devices within the program meet a demanding set of specifications for organizations with challenging and diverse business requirements. Another benefit is the enhanced level of technical support and training enterprises receive from Google through the Android Academy.
It is becoming clear the consumer tech industry has reached maturity. This seems like an odd thing to say given how long the tech industry has been around, but as I’ve observed through the years, the consumer tech part of the industry is relatively young. PCs didn’t truly go mass market until the mid-2000s, and the most personal computer ever invented, the smartphone, is only now showing signs of functioning like a mature market. Nearly every single data point I collect sends a signal that consumer tech is now reaching full maturity, which will shift the industry dynamics dramatically for everyone looking to compete.
Intel was one of the early noise makers around the upcoming transition from 4G to 5G cellular technology, joining Qualcomm in promising a revolution that would change the way we interact with nearly everything around us. Though not exactly a newcomer to the world of wireless technology, with the launch of WiMAX under its belt, Intel has much to prove in the space of wireless communications.
News this week centers on Intel’s partnerships with key PC vendors to bring 5G-enabled notebooks to market in late 2019, and on a deal with Chinese chip vendor Spreadtrum to use Intel 5G modems in smartphones in the same timeframe.
The strategy Intel has been progressing for several years is holistic in nature, covering everything from processors for cloud server infrastructure and the datacenter to network storage to cellular modems and even device-level chips. This broad and extensive approach provides Intel with some critical advantages: it can leverage the areas where it is already considered a leader for safer bets while diving into riskier areas, like the cellular modem itself, where traditional technology providers like Qualcomm lead.
5G Cloud System Advantage
Intel’s core markets in the spectrum of 5G technology lie with systems that depend on hardware designs that the company is already dominant in. Cloud datacenters, for example, are powered today by Intel servers using its Xeon product family that holds more than a decade of unrivaled leadership. Even the network storage and virtualization segments that connect the cloud systems to the cellular networks favor the Intel architecture and design, with years of software development and enterprise expertise under its belt.
Managers and CTOs are intimately familiar with the capabilities and performance that Intel provides in these spaces, and feel more comfortable adopting the company’s chips for the 5G migration coming in 2018 and 2019.
Edge Computing Creates Growth
Edge computing is a new and growing field for systems and represents the migration of higher performance servers from the centralized datacenter to as near to the consumer as possible. This could materialize as hardware living at the site of each cellular antenna or collections of servers distributed to key locations around the country, addressing large urban populations.
As the movement to smart cities, robotics, and multi-purpose drones grows along with 5G, the need for analytics, off-loaded processing, and data storage to be closer to the edge increases. This data and compute proximity lowers dependency on any single datacenter location and improves performance while reducing latency of the interactions.
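The latency benefit of proximity is easy to see with back-of-the-envelope math. The sketch below compares propagation-only round-trip times for a nearby edge site and a distant centralized datacenter; the distances and fiber propagation speed are illustrative assumptions, not figures from this article.

```python
# Rough propagation-delay comparison: edge site vs. centralized datacenter.
# Distances and fiber speed are assumptions chosen for illustration only.

C_FIBER_KM_PER_MS = 200  # light travels roughly 200 km per millisecond in optical fiber


def round_trip_ms(distance_km):
    """Propagation-only round trip; real latency adds routing and queuing delays."""
    return 2 * distance_km / C_FIBER_KM_PER_MS


# Hypothetical edge server ~25 km away vs. a regional datacenter ~2,000 km away.
print(round_trip_ms(25))    # sub-millisecond for the nearby edge site
print(round_trip_ms(2000))  # tens of milliseconds before any processing happens
```

Even this idealized floor, which ignores radio access and processing time entirely, shows why latency-sensitive workloads like robotics and drones push compute toward the edge.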
These edge compute roll-outs will offer a significant revenue growth opportunity for all players, including AMD, but Intel’s leadership in the server space gives it the greatest upside potential.
Intel Behind in 5G Modems
Moving the discussion to the cellular networks themselves and the need for a 5G modem in devices like smartphones and PCs, Intel has a very different outlook. Modem technology and analog signaling is a more complex field than most understand, and the lack of experience on Intel’s teams is a significant concern. Qualcomm has publicly stated several times that it believes it holds a 12-24 month lead on any competitor in the 5G modem space.
Intel has a 5G modem called the XMM 8060, and at the 2018 Winter Olympics Intel has been demoing 5G technology through various VR experiences. However, the 5G silicon in use there is not the final chip the XMM 8060 will ship as, but a “concept modem” used for trials, diagnostics, and product tuning. All technology vendors use this tactic to gain knowledge about products in the design phase, but it’s rare to see significant public demonstrations using them.
At Mobile World Congress in Barcelona next week, Intel will be showing the world’s first 5G-enabled Windows PC. Hot on the heels of Qualcomm touting the future of connected Windows devices shipping this quarter, Intel is eager to assert its dominance in the PC world and showcase the future of mobile computing. Partnering with Dell, HP, Lenovo, and Microsoft to enable 5G connectivity alongside Intel Core processors, Intel believes the market will see 10-15% attach rates for cellular modems on notebooks over the next 3-5 years. Computers based around this 5G technology aren’t expected to ship until holiday 2019 and the demoed prototype is still using the “concept modem” from Intel rather than final silicon.
The second announcement from MWC next week is Intel’s partnership with Spreadtrum, a Chinese semiconductor company that builds mobile processors for smartphones. As part of a multi-year agreement, Intel will provide a series of XMM 8000-series 5G-enabled modems for Spreadtrum to use in conjunction with its own mobile chips beginning, again, in the second half of 2019. Though Spreadtrum is a small SoC vendor globally, having any partners announced this far in advance is a positive sign. However, if you compare this to recent Qualcomm announcements that included 18+ device OEMs that will be using its 5G modems this year, Intel is well behind.
The Apple Possibility
There is one possible exception: Apple. Rumors continue to circulate that Apple may be trying to remove Qualcomm modems from its iPhone product family completely in 2019 or 2020, with Intel being the obvious modem replacement. If that holds true, Intel will have an enormous customer account to justify its development costs. Being associated with a company often considered the most advanced in the mobile space has its advantages too.
Apple has indicated that it sees 5G technology as a 2020 growth opportunity, which would allow time for Intel to finalize the XMM 8060 modem. Competing Android devices are expected to ship in late 2018 and ramp in 2019 using Qualcomm 5G modems.
As we are continually reminded, social media is a powerful force. It has given individuals a powerful voice and empowered communities to stand up to governments, oppression, and a range of social issues. While we have been made keenly aware of the benefits of social media, we are reminded of late of the downsides: downsides which many never saw coming and which, true to the category’s name, have the potential for negative social disruption.
Like millions of other people throughout the US, this weekend I went with my family to watch “Black Panther,” the latest Marvel hero movie. I sat through the two hours and fifteen minutes of this celebration of Black ingenuity, strength, and power, and I marveled – no pun intended – at some of the special effects that showcased the technology made in Wakanda.
If you are not familiar with the story, I can share without spoilers that Wakanda is a reclusive kingdom in Africa with extraordinary technological power, enabled mostly by vibranium, a material that absorbs kinetic energy, rendering it virtually indestructible. In command of most of the technology we see displayed in the movie is Princess Shuri, younger sister to King T’Challa, engineer and scientist extraordinaire, and, according to my daughter, the most powerful woman in Wakanda. There was a lot of tech on display in the movie, but a few things stood out that I wished I had access to.
Remote Transmission and Holograms
There are two scenes in the movie where remote transmission and holograms play a big role. One is a car chase in South Korea where Shuri is operating a real car from a holographic replica in her labs in Wakanda and the other is that of CIA Agent Ross taking a spaceship into battle from a holographic replica in the labs.
Let’s start with how cool it would be to control cars and other moving objects that are miles away from you. In a world where we are getting excited about self-driving everything, it might seem like we do not need this technology, but when I saw the car scene the first thing that went through my head was: I bet there could be a “designated driver service” opportunity! And I am not just thinking about avoiding drunk driving but also about the ability to monitor drivers and take over a car for tired or incapacitated drivers. Of course, the complexity of remotely controlling a moving object with no time lag is immense. Yet, I would guess that self-driving cars will rely on many of the same components, from sensors and computing power to network coverage and speed, minus the human component that remains in control of the vehicle.
As excited as I got about the potential of having a personal chauffeur when I needed one (and, unlike with Uber and Lyft, not having to worry about sharing the car with a stranger), I was even more excited about the future potential of holograms.
I have had Microsoft HoloLens demos, and I have tried both AR and VR, and as immersive as these experiences are, you are still missing the tactile component. Haptics has been used with touch screens for many years so that we receive tactile feedback every time we touch a specific key on our smartphone or PC. With VR we need to move this type of 3D touch experience to our full body to have a truly immersive experience. The most off-putting part of VR today, in my opinion, is the controllers, which in most cases prevent you from feeling fully immersed simply because they cannot replicate a wide enough range of gestures. Sure, you can pull a trigger, but you often cannot grab and hold an object in a natural way. There are some data gloves that can both track hand motion and use air bladders to harden and restrict grip so that you can feel an object like a ball in virtual reality, but these are very high-end solutions that might hit enterprises before they can be made available to consumers. The CyberGrasp is a good example of such a solution. There are also solutions like the Teslasuit, a full body suit that aims to let you feel impacts and temperatures as well as track full body movement. The opportunity for such solutions in a B2C environment is immense; from entertainment to health, touch will take VR to the next level.
Kimoyo Beads
There are a few implementations of the Kimoyo beads in the movie, but the most obvious one is in bracelets where each of the beads serves a different function. You can see many Wakandans wearing these bracelets throughout the film. There is a Prime Bead that provides a lot of information about the wearer’s health, as Wakandans wear these from the moment they are born. There are Audio/Visual Beads that provide a holographic display (with no need for special glasses) with access to the kingdom’s database; the size of the screen can be altered to go from personal use to shared viewing. There are Communication Beads that can be used in a similar way to mobile phones but that also use sign language to send text messages. Finally, there are simpler beads with single-function sensors for things like geotagging.
What I love about these bracelets is the promise of what wearables could be going forward. In particular, I like that the Kimoyo bracelets reflect jewelry that is quite common among African populations, hinting at the fact that technology is better adopted when hidden in everyday objects. This was the hope for smartwatches, one that has struggled to materialize, as the union of technology and fashion did not quite deliver for many brands. Apple Watch is the success story that seems to embody many of the Kimoyo beads’ capabilities: communication, audio, and, increasingly, health.
Over the years we have seen smartphones toy with the idea of using gestures to execute some functions from dialing a number by writing the numbers in the air with the phone, to hanging up by turning the phone screen-down. Smartwatches have yet to incorporate gestures as part of communication. I am not saying it would be easy, especially for an Italian like me whose hands can say more than words but I find it interesting that the idea of using sensors on wearables and gestures to either spell out letters or perform specific actions has not been entertained enough to get to market.
Our phones are becoming repositories of all sorts of information, and recently Apple shared its plans to add health records to the list of data these gadgets hold. Wearables would be an even better location for at least some 911-type information. Think, for example, of an accident that requires the intervention of a paramedic, who could get vital information such as allergies and medications right from the wearable rather than having to spend time searching for a phone that might have been thrown several feet away from the patient.
There is certainly much more that wearables can be, as long as we keep the right balance of design and usefulness. The beads concept of splitting use cases and features across different beads could extend to bands that you wear with your watch depending on your activity: a general band, a work-out band, a security band, and so on.
Some of these technologies are closer than we think; how they will be marketed, and whether we as a society are prepared to accept and embrace them, remains to be seen. More importantly, there are choices that Silicon Valley will have to make. Of course, Silicon Valley will not create technology for the benefit of its own people, as Silicon Valley is not a kingdom but a conglomerate of corporations. Yet, I believe such corporations are called more and more to ponder whether they will create for the pleasure of some versus the benefit of all.
One of the most important markets for the tech industry is the connected home. Connected thermostats, televisions, lights, appliances, security cameras, door locks, and more have gained strong consumer interest around the world and are at the heart of making homes and even offices smarter. I have been studying the connected home since 2002 and wrote one of the first reports on this idea in 2004, about having a home with devices connected to the Internet. In that report, I stated that while I saw the potential of connected devices, I believed they would only gain real traction when they had more processing power behind them, better connectivity, and were controlled centrally.
On the eve of the world’s largest trade show dedicated to all things telecom—Mobile World Congress (MWC), which will be held next week in beautiful Barcelona, Spain—everyone is extraordinarily focused on the next big industry transition: the move to 5G.
The interest and excitement about this critical new network standard is palpable. After years of hypothetical discussions, we’re finally starting to see practical test results, helped along by companies like National Instruments, being discussed, and realistic timelines being revealed by major chip suppliers like Qualcomm and Intel, phone makers like Samsung, network equipment providers like Ericsson, as well as the major carriers, such as AT&T and Verizon. To be clear, we won’t see smartphones with 5G modems that we can actually purchase, or the mobile networks necessary to support them, until around next year’s show—and even those will be bleeding edge examples—but we’ve clearly moved past the “I’m pretty sure we’re going to make it” stage to the “let’s start making plans” stage. That’s a big step for everyone involved.
As with the transition from 2G to 3G and 3G to 4G, there’s no question that the move to 5G is also a big moment. These industry transitions only occur about once a decade, so they are important demarcations, particularly in an industry that moves as fast as the tech industry does.
The transition to 5G will not only bring faster network connection speeds—as most everyone expects—but also more reliable connections in a wider variety of places, particularly in dense urban environments. As connectivity has grown to be so crucial for so many devices, the need for consistent connections is arguably even more important than faster speeds, and that consistency is one of the key promises that we’re expecting to see from 5G.
In addition, the range of devices that are expected to be connected to 5G networks is also growing wider. Automobiles, in particular, are going to be a critical part of 5G networks in the next decade, especially as more assisted, semi-autonomous and (eventually) fully autonomous cars start relying on critical connections between cars and with other road-related infrastructure to help them function more safely.
As exciting as these developments promise to be, however, it’s also becoming increasingly clear that the switchover from 4G to 5G will be far from a clean, distinct break. In fact, 5G networks will still be very dependent on 4G network infrastructure—not only in the early days when 5G coverage will be limited and 4G will be an essential fallback option—but even well into the middle of the 2020s and likely beyond.
A lot of tremendous work has been done to build a robust 4G LTE network around the world and the 5G network designers have wisely chosen to leverage this existing work as they transition to next generation standards. In fact, ironically, just before the big 5G blowout that most are expecting at this year’s MWC trade show, we’re seeing some big announcements around 4G.
Qualcomm’s latest modem, the X24, for example, isn’t a 5G model (though the previously announced X50 modem is expected to be the first commercial modem to comply with the recently ratified 5G NR “New Radio” standard), but rather a further refinement of 4G. Offering theoretical download speeds of up to 2 Gbps thanks to 7x carrier aggregation—a technology that allows multiple chunks of radio bandwidth to function as a single data “pipe”—the X24 may, in fact, offer even faster connection speeds than early 5G networks will enable. In theory, first generation 5G networks should go up to 4 Gbps and even higher, but thanks to the complexities of network infrastructure and other practical realities, real-world numbers are likely to be well below that in early incarnations.
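As a rough illustration of how carrier aggregation reaches those headline numbers, peak throughput is essentially the sum of the per-carrier rates (bandwidth times spectral efficiency). The figures below are assumptions chosen so the arithmetic lands near 2 Gbps; they are not Qualcomm’s published X24 configuration.

```python
# Illustrative carrier-aggregation arithmetic. The carrier count, carrier
# bandwidth, and effective spectral efficiency are assumptions for the
# sketch, not vendor specifications.

def aggregate_throughput_mbps(carriers, mhz_per_carrier, bits_per_hz):
    """Peak rate = number of carriers x bandwidth x effective spectral efficiency."""
    return carriers * mhz_per_carrier * bits_per_hz

# 7 aggregated 20 MHz carriers; ~14.3 bits/s/Hz is the effective efficiency
# needed to approach 2 Gbps (plausible on paper with 4x4 MIMO and 256-QAM
# on enough of the carriers).
peak = aggregate_throughput_mbps(7, 20, 14.3)
print(f"~{peak:.0f} Mbps peak")
```

The point of the sketch is simply that aggregating seven channels multiplies the single-carrier ceiling, which is how a 4G design can post numbers that rival early 5G deployments.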
Of course, this is nothing new. In other major network transitions, we saw relatively similar phenomena, where the last refinements to the old network standards were actually a bit better than the first iterations of the new ones.
In addition, a great deal of device connectivity will likely remain on networks other than official 5G for some time. Standards like NB-IoT and Cat M1 for Internet of Things (IoT) applications actually ride on 4G LTE networks, and there’s little need (and no serious standards work yet underway) to bring these over to 5G. Even in automotive, though momentum is rapidly shifting, the “official” standard for vehicle-to-vehicle (V2V) connections in the US is still DSRC, and the first cars with it embedded just came out this year. DSRC is a nearly 20-year-old technology, however, and was designed well before the idea of autonomous cars became a reality. As a result, it isn’t likely to last as the standard much longer, given the dramatically increased network connectivity demands that even semi-autonomous automobiles will create. Still, it highlights yet another example of the challenges of evolving to a truly 5G world.
There is no question that 5G is coming and that it will be impactful. However, it’s important to remember that the lines separating current and next generation telecom network standards are a lot blurrier than they may first appear.
This week Device as a Service (DaaS) pioneer HP announced it was expanding its hardware lineup. In addition to adding HP virtual reality products including workstations and headsets, the company also announced it would begin offering Apple iPhones, iPads, and Macs to its customers. It’s a bold move that reflects the intense and growing interest in this space, as well as Apple’s increasingly prominent role on the commercial side of the industry.
First Came PCaaS
IDC’s early research on PC as a Service (PCaaS) showed the immense potential around this model. It’s exciting because it is a win/win for all involved. For companies, shifting to the as a service model means no longer having to budget for giant capital outlays around hardware refreshes. As IT budgets have tightened, and companies have moved to address new challenges and opportunities around mobile, cloud, and security, device refreshes have often extended out beyond what’s reasonable. Old PCs limit productivity and represent ongoing security threats, but that hasn’t stopped many companies from keeping them in service for five years and more.
PCaaS lets companies pay an ongoing monthly fee that builds in a more reasonable life cycle. That fee can also include a long list of deployment and management services. In other words, companies can offload the day-to-day management of the PC from their IT department to a third party. And embedded within these services is the ability of the provider to capture analytics that help guide future hardware deployments and ensure security compliance.
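To make the budgeting shift concrete, here is a minimal sketch of the trade-off: spreading a one-time hardware purchase over its refresh cycle versus a recurring per-seat fee that bundles services. Every number is a hypothetical placeholder for illustration, not real vendor pricing.

```python
# Hypothetical capex-vs-PCaaS comparison. Unit cost, fee, and lifecycle
# lengths are invented placeholders, not actual vendor figures.

def capex_monthly_equiv(unit_cost, refresh_years):
    """Spread an upfront hardware purchase evenly over its refresh cycle."""
    return unit_cost / (refresh_years * 12)

def pcaas_total(monthly_fee, term_years):
    """Total paid over a PCaaS term at a flat monthly per-seat fee."""
    return monthly_fee * term_years * 12

# A $1,000 PC stretched over a 5-year refresh, hardware only...
hw_only = capex_monthly_equiv(1000, 5)
# ...versus a $45/month seat on a 3-year lifecycle, where the fee also
# covers deployment, management, and a newer device.
service_total = pcaas_total(45, 3)
print(hw_only, service_total)
```

The service seat costs more per month than the raw hardware amortization, which is exactly the point: the delta is what the company pays to offload management, shorten the lifecycle, and avoid the capital outlay.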
PC vendors and other service providers offering PCaaS like it because it allows them to capture more services revenue, shorten product lifecycles, and smooth out the challenges associated with the historical ebb and flow of big hardware refreshes often linked to an operating system’s end of life. HP was the first major PC vendor to do a broad public push into the PCaaS space, leveraging what the company learned from its managed print services group. Lenovo has been dabbling in the space for some time but has recently become more public about its plans here. And Dell has moved aggressively into the space in the last year, announcing its intentions at the 2017 DellWorld conference. Each of the three major PC vendors brings its own set of strengths to the table in this competitive market.
Moving to DaaS
HP’s announcement about offering more than just PCs, as well as Apple devices, is important for several reasons. Chief among them is that in many markets, including the U.S. (where this is launching first), iOS has already established itself as the preferred platform in many companies. By acknowledging this, HP quickly makes its DaaS service much more interesting to companies who have shown an interest in this model, but who were reluctant to do so if it only included PCs. Second, while HP has a solid tablet business, it doesn’t have a viable phone offering today. For many companies, this would be an insurmountable blocker, but to HP’s credit, it owned this issue and went out and found the solution in Apple. It will be interesting to see if the other PC vendors eventually announce similar partnerships with age-old competitors. It’s worth noting that Dell also doesn’t have a phone offering, while Lenovo does have a phone business that includes the Moto brand.
It was also very heartening to see HP announce it would begin offering its virtual reality hardware as a service, too. Today that means the HP Z4 Workstation and the HP Windows Mixed Reality VR headset, but over time I would expect that selection to grow. As I’ve noted before, there is strong interest from companies in commercial VR. By offering the building blocks As A Service, HP enables companies to embrace this new technology without a massive capital outlay up front. I would expect to see both Dell and Lenovo, which also have VR products, to do the same in time. And while VR represents a clear near-term opportunity, Augmented Reality represents a much larger commercial opportunity long term. There’s good reason to believe that many companies will turn to AR as a Service as the primary way to deploy this technology in the future. And beyond endpoint devices such as PCs, tablets, phones, and headsets, it is reasonable to expect that over time more companies will look to leverage the As A Service model for items such as servers and storage, too.
Today just a small percentage of commercial PC shipments go out as part of an As a Service agreement, but I expect that to ramp quickly in the next few years. The addition of phones, tablets, AR/VR headsets, and other hardware will help accelerate this shift as more companies warm to the idea. That said, this type of change doesn’t come easily within all companies, and there will likely continue to be substantial resistance inside many of them. Much of this resistance will come from IT departments who find this shift threatening. The best companies, however, will transition these IT workers away from the day-to-day grind of deployment and management of devices to higher-priority IT initiatives such as company-wide digital transformation.
At IDC we’re about to launch a new research initiative around Device as a Service, including multiple regional surveys and updated forecasts. We’ll be closely watching this shift, monitoring what works, and calling out the areas that need further refinement. Things are about to get very interesting in the DaaS space.
I have been spending a lot of time lately with clients and people in the industry discussing Qualcomm and Microsoft’s push to create an ARM-based platform for Windows laptops. Although these two companies launched a Windows on ARM program four years ago, that initiative failed due to the underpowered ARM processors available at the time and a version of Windows that did not run well on those early ARM-based laptops.
Nearly a full year after the company started revamping its entire processor lineup to catch up with Intel, AMD has finally released a chip that can address one of the largest available markets. Processors with integrated graphics ship in the majority of PCs and notebooks around the world, but the company’s first Ryzen processors released in 2017 did not include graphics technology.
Information from Jon Peddie Research indicates that 267 million processors with integrated or embedded graphics were shipped in Q3 2017 alone. The new AMD part that goes on sale today in systems and the retail channel gives AMD a chance to cut into Intel’s significant market leadership in this segment, replacing a nearly 2-year-old product.
Today AMD stands at just 5% market share in the desktop PC space with integrated graphics processors, a number that AMD CEO Lisa Su believes can grow with this newest Ryzen CPU.
Early reviews indicate that the AMD integrated graphics chips are vastly superior to the Intel counterparts when it comes to graphics and gaming workloads and are competitive in standard everyday computing tasks. Testing we ran that was published over at PC Perspective shows that when playing modern games at mainstream resolutions and settings (720p to 1080p depending on the specific title in question), the Ryzen 5 2400G is as much as 3x faster than the Core i5-8400 from Intel when using integrated processor graphics exclusively. This isn’t a minor performance delta and is the difference between having a system that is actually usable for gaming and one that isn’t.
The performance leadership in gaming means AMD processors are more likely to be used in mainstream and small form factor gaming PCs and should grab share in expanding markets.
China and India, both regions that are sensitive to cost, power consumption, and physical system size, will find the AMD Ryzen processor with the updated graphics chip on-board compelling. AMD offers much higher gaming performance using the same power and at a lower price. Intel systems that want to compete with the performance AMD’s new chip offers will need to add a separate graphics card from AMD or NVIDIA, increasing both cost and complexity of the design.
Though Intel is the obvious target of this new product release, NVIDIA and AMD (ironically) could also see an impact, as systems that use the new AMD processor won’t need a low-cost discrete graphics chip. This will only affect the very bottom of the consumer product stack though, leaving the high-end of the market alone, where NVIDIA enjoys much higher margins and market share.
The GT 1030 from NVIDIA and the Radeon RX 550 from AMD are both faster in gaming than the new Ryzen processor with integrated graphics, but the differences are small enough that consumers in this space are likely to see it as a wash. Adding to the story is the fact that the Ryzen processor is cheaper, draws less power, and puts fewer requirements on the rest of the system (a lower cost power supply, a smaller chassis).
AMD’s biggest hurdle now might be to overcome the perception of integrated processor graphics and the stigma it has in the market. DIY consumers continue to believe that all integrated graphics is bad, a position made prominent by the lack of upgrades and improvements from Intel over the years. Users are going to need to see proof (from reviews and other users) to buy into the work that AMD has put into this product. Even system integrators and OEMs that often live off the additional profit margin of upgrades to base system builds (of which discrete graphics additions are a big part) will push back on the value that AMD provides.
AMD has built an excellent and unique processor for the mainstream consumer and enterprise markets that places the company in a fight it has been absent from for the last several generations. Success here will be measured not just by channel sales but also by how many inroads it can make in the larger consumer and SMB pre-built space. Messaging and marketing the value of vastly superior processor graphics is the hurdle leadership needs to tackle out of the gate.
It is no secret that I’ve been very bullish on Apple Watch since day one. I’ve held my ground against the naysayers and defended this product because I believed in it and the broader role it can and will play in the future of computing. After a rough second year, when many of the naysayers thought they were right, Apple Watch is truly gaining steam.
It is interesting that Apple’s HomePod has ignited a broader philosophical debate, within the tech industry and among pundits and observers, around what is really at stake with voice assistants. Everyone has an opinion on this, and the implications for the future deserve serious thought as well.
I suppose, first, we should ask if we want a PC at all! Our recent study, run across 1,262 US consumers, says we do. Less than one percent of the panelists said they have no intention of buying another PC or Mac. As a matter of fact, twenty-five percent of the panel is in the market to buy a new PC or Mac in the next twelve months.
What Do We Want When Buying a Notebook?
Well, it depends who you are!
No matter which brand of PC we own, or how savvy of a user we are, when it comes to notebooks there is one thing we want out of the next computer we are buying: a longer battery life. Fifty-nine percent of our panelists picked battery life as a must-have feature in their next PC – one third more than for any other feature.
The other two top features differ a little depending on the camp you are in. While not strictly a feature, brand comes in as the second most important consideration for Mac buyers, which I am sure is no surprise, as with brand you buy into an entire ecosystem of both software and devices. Outside of Apple users, current PC owners rank brand only as the sixth most important feature, which poses some interesting challenges for the many brands in the Windows ecosystem trying to establish themselves in the high end. Going back to hardware, what comes after battery very much depends on the kind of user you are. For early adopters a higher-resolution display matters (34%), but for everybody else, including Mac owners, it is about more memory.
So where is connectivity in the list of features for our next notebook? Not much of a priority it seems.
Only 23% of our panel picked cellular connectivity as one of the top three features they want in their next notebook. Even more interesting, only 19% of early tech adopters did so. I believe there are a couple of things at play here: either early tech adopters are quite happy to use their phone as a hotspot when they need connectivity, or they are simply happy to use their phone for all of their on-the-go needs. It seems that, in this case, being tech-savvy is working against a feature that is being marketed as cutting edge. Where cellular connectivity resonates is with mainstream users (28% of whom listed it in their top three features) and late adopters (20%). It seems to me that with these users, the marketing message around the PC being the same as your phone is working quite well.
The short-term opportunity, considering current owners in the market to upgrade their notebook within the next twelve months, is not much more encouraging, as only 25% of them picked cellular connectivity as a top-three must-have.
We also wanted to see if people who have a more flexible work setup, in both hours and location, might be better placed to appreciate such a feature, but it does not seem that this is the case. Cellular was, in fact, selected as a top-three feature by only 19% of panelists fitting that work style.
We Say We Want It, but Do We Want to Pay for It?
While the interest in cellular was not great, let’s dig a little deeper and understand what kind of premium potential buyers are willing to pay for the luxury of being connected any time any place.
We asked panelists to imagine the following scenario: “you are purchasing your next notebook, and you have settled on a model when they tell you that it comes with a variant that offers 22 hours of continuous battery life and always-on internet connectivity. What premium (above the cost of the model without those features) would you be prepared to pay for it?”
Maybe conditioned by the current iPad offering, which still puts a $100 premium on cellular, or maybe because it is the sweet spot for this feature, 34% of the panelists would consider paying up to $100 more. Seventeen percent would choose the cheaper model, and another 12% would expect those features to come as standard. This picture does not change much even among people who picked cellular connectivity among their top-three must-have features.
Where we find a considerable difference is in the willingness to pay a monthly fee for that cellular connectivity. Among consumers who are interested in cellular capability, only 19% said they were not interested in paying monthly, while among the overall panel, that number more than doubles to 39%.
When companies talk about PC connectivity and point to user behavior with smartphones as a parameter to determine demand and success potential, I think they are missing the mark. There are two major differences that play a big role in how consumers will interact with PCs compared to their phones:
Smartphones are truly mobile, while PCs are nomadic. This is a big difference, as it implies I might use my phone while I walk or while I am standing in a crowded train or bus, but I would never do that with a PC. When I use a PC I am sitting somewhere, and more often than not that place will have Wi-Fi. This is certainly true in the US, but less so in Europe and Asia, which is why those markets offer better opportunities for cellular-enabled PCs.
The other factor that I think is not considered enough is the much wider application pool we have to choose from on our smartphones compared to our PCs. On the smartphone it is not just about email and video; it is about social media, news, books, chat, gaming, and the list goes on. So in a way, there are more things I can do with my smartphone that I might want to do while on the go than I will ever be able to do on my PC. Sometimes having a larger screen is a disadvantage, not just in terms of portability but privacy too.
Does Always-Connected Simply Mean Always-On?
Maybe when we think of connectivity, we think more about power than cellular. Judging by the craving for longer battery life that comes through in our data, it sure seems that way. That is the one feature we all agree we want in our next notebook. Our panelists would even consider buying a PC with a processor they were not familiar with, as long as it delivered on battery life: 29% said they would do so for a notebook delivering between 14 and 16 hours, another 17% wanted 16 to 18 hours, and another 17% wanted 18 to 20 hours. Early adopters are even more demanding, with 35% wanting between 14 and 16 hours before they would consider a processor brand they are not familiar with.
This is where the short term opportunity for Qualcomm and Microsoft and their always-connected PC really is. Among the panelists looking to upgrade in the next 12 months, a whopping 67% would consider a PC with an unfamiliar brand if it delivered between 14 and 20 hours of battery.
I live with modern technology and with the bleeding edge of technology in my home. But I don’t live with the modern Internet. What I mean by that is I don’t live with modern Internet speeds. Brace yourself when I tell you this, but my average broadband speed at home is 3.5 Mbps. Yes, megabits per second. My home broadband speed is not that different from the average speeds of third world countries. In fact, several third world countries have better broadband than I do.
So easy to take for granted, yet impossible to ignore. I’m speaking, of course, of WiFi, the modern lifeblood of virtually all our tech devices. First introduced in 1999 as a somewhat odd marketing term (it’s commonly believed to be short for “Wireless Fidelity”), the wireless networking technology leverages the 802.11 technical standards, which first appeared in 1997.
Since then, WiFi has morphed and adapted through variations including 802.11b, a, g, n, ac, ad, and soon, ax and ay, among others, and has become as essential to all our connected devices as power. Along the way, we’ve become completely reliant on it, placing utility-like demands upon its presence and its performance.
Unfortunately, some of those demands have proven to be ill-placed as WiFi has yet to reach the ubiquity, and certainly not the consistency, of a true utility. As a result, WiFi has become the technology that some love to hate, despite the incredibly vital role it serves. To be fair, no one really hates WiFi—they just hate when it doesn’t work the way they want and expect it to.
Part of the challenge is that our expectations for WiFi continue to increase, not only in terms of availability, but speed, range, number of devices supported, and much more. Thankfully, a number of improvements, in both component technology and product definition, have started to appear that help bring WiFi closer to the completely reliable, utterly dependable technology we all want it to be.
One of the most useful of these for most home users is a technology called WiFi mesh. First popularized by smaller companies like Eero nearly two years ago, and then supported by Google in its home routers, WiFi mesh systems have become “the” hot technology for home WiFi networks. Products using the technology are now available from a wide variety of vendors including Netgear, Linksys, TP-Link, D-Link and more. These WiFi mesh systems consist of at least two (and often three) router-like boxes that all connect to one another, boosting the strength of the WiFi signal, and creating more efficient data paths for all your devices to connect to the Internet. Plus, they do so in a manner that’s significantly simpler to set up than range extenders and other devices that attempt to improve in-home WiFi. In fact, most of the new systems configure themselves automatically.
From a performance perspective, the improvements can be dramatic, as I recently learned firsthand. I’ve been living with about a 30 Mbps connection from the upstairs home office where I work down to the Comcast Xfinity home gateway providing my home’s internet connection, even though I’m paying for Comcast’s top-of-the-line package that theoretically offers download speeds of 250 Mbps. After I purchased and installed a three-piece Netgear Orbi system from my local Costco, my connection speed over the new Orbi WiFi network jumped by over 5x to about 160 Mbps—a dramatic improvement, all without changing a single setting on the Comcast box. Plus, I’ve found the connection to be much more solid and not subject to the kinds of random dropouts I would occasionally suffer through with the Xfinity gateway’s built-in WiFi router.
In addition, there were a few surprise benefits to the Netgear system that, though they may not be relevant for everyone, really sealed the deal for me. In another upstairs home office, there is a desktop PC and an Ethernet-equipped printer, both of which had separate WiFi hardware: the PC used a USB-based WiFi adapter and the printer had a WiFi-to-Ethernet adapter. Each of the “satellite” routers in the Orbi system has four Ethernet ports supporting up to Gigabit speeds, allowing me to ditch those flaky WiFi adapters and plug both the PC and printer into a rock-solid, fast Ethernet connection on the Orbi. What a difference that made as well.
The technology used in the Netgear Orbi line is called a tri-band WiFi system because it leverages three simultaneously functioning 802.11 radios: one supports 802.11b/g/n at 2.4 GHz for dedicated connections with older WiFi devices, and two support 802.11a/n/ac at 5 GHz. One of the 802.11ac-capable radios handles connections with newer devices, and the other is used to connect with the other satellite routers and create the mesh network. The system also uses critical technologies like MU-MIMO (Multi-User, Multiple Input, Multiple Output) to leverage several antennas, and higher-order modulation schemes like 256-QAM (Quadrature Amplitude Modulation) to improve data throughput speeds.
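For readers curious how those modulation and antenna technologies translate into the link rates printed on router boxes, here is a rough sketch of the standard 802.11ac PHY-rate arithmetic. The constants (234 data subcarriers for an 80 MHz channel, 3.6 microsecond symbol duration with a short guard interval) come from the published 802.11ac specification, not from this article, so treat this as an illustrative calculation rather than a description of the Orbi’s exact configuration.

```python
# Sketch: deriving an 802.11ac theoretical PHY rate from modulation order,
# coding rate, and spatial stream count. Constants assume an 80 MHz channel
# with a short guard interval, per the 802.11ac spec.

def phy_rate_mbps(bits_per_symbol, coding_rate, spatial_streams,
                  data_subcarriers=234, symbol_time_us=3.6):
    """Theoretical PHY data rate in Mbps for one 802.11ac configuration."""
    # Bits carried by one OFDM symbol on one spatial stream,
    # after forward-error-correction overhead.
    bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate
    # Rate = bits per symbol / symbol duration, scaled by stream count.
    return spatial_streams * bits_per_ofdm_symbol / symbol_time_us

# 256-QAM encodes 8 bits per subcarrier; the top 802.11ac rate (MCS 9)
# pairs it with a 5/6 coding rate. Two spatial streams yield the familiar
# "866 Mbps" figure quoted for many 2-stream 802.11ac radios.
print(round(phy_rate_mbps(bits_per_symbol=8, coding_rate=5/6,
                          spatial_streams=2), 1))  # 866.7
```

The same function reproduces other advertised numbers by swapping parameters, which is why adding antennas (more spatial streams) or denser modulation (64-QAM to 256-QAM) moves the headline speed.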
Looking ahead in WiFi technology from a component perspective, we’ve started to see the introduction of pre-standard silicon for the forthcoming 802.11ax standard, which offers some nominal speed improvements over existing 802.11ac, but is more clearly targeted at improving WiFi reliability in dense environments, such as large events, tradeshows, meetings, etc. There’s also been some discussion about 802.11ay, which is expected to operate in the 60 GHz band for high speeds over short distances, similar to the current 802.11ad (also known as WiGig) standard.
As with previous generations of WiFi, there will be chips from companies like Qualcomm that implement a pre-finalized version of 802.11ax for those who are eager to try the technology out, but compatibility could be limited, and it’s not entirely clear yet if devices that deploy them will be upgradable when the final spec does get released sometime in 2019.
The bottom line for all these technology and component improvements is that even at the dawn of the 5G age, WiFi is well positioned for a long, healthy future. Plus, even better, these advancements are helping the standard make strong progress toward the kind of true utility-like reliability and ubiquity for which we all long.
This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing Apple’s HomePod smart speaker, the re-integration into Google of the Nest smart home products business, and the quarterly earnings for Twitter and Nvidia.