The Importance of Smart Speakers

One of the most important markets for the tech industry is the connected home. Connected thermostats, televisions, lights, appliances, security cameras, door locks, and more have gained strong consumer interest around the world and are at the heart of making homes and even offices smarter.
I have been studying the connected home since 2002 and wrote one of the first reports on this idea in 2004, describing a home with devices connected to the Internet. In that report, I stated that while I saw the potential of connected devices, I believed they would only gain real traction when they had more processing power behind them, better connectivity, and were controlled centrally.

In 2004, we did not have smartphones, smart speakers, or even any standardized wireless protocols that could deliver on the concept of the smart home, but I proposed that once we did, it would need some hub that served as a control center. The device I suggested for this hub was the television. We were working at the time with a semiconductor company that specialized in television processors and had a vision for making the TV more intelligent. As I looked at their processor roadmap, I could see a glimpse of where they could go with these chips to deliver a TV connected to the Internet and, putting two and two together, I surmised that a TV with the right intelligence and connectivity just might be able to serve as the control hub of a connected home.

Fast forward to today and TVs indeed have become smarter, with greater levels of connectivity. While they are placed in most homes in a central location that could serve as a smart hub, none of the TV vendors have designed them to be connected smart home control centers. Interestingly, I believe this was the thinking behind Steve Jobs’ desire to create a TV. While Jobs told his biographer Walter Isaacson that his vision for the TV was focused on user interfaces and ease of use, I believe his vision was more of a Trojan Horse in that the TV could also become a smart hub for a connected home. Now, this is pure conjecture on my part, but I’ve spent enough time following Steve Jobs and tracking his motives to see that he had more in mind than just a better UI for a TV.

The idea of a hub that sits at the center of controlling a smart home is more relevant today than ever. While I thought a TV was the most logical device to serve as that hub, it is clear now that the smart speaker has become the right device for that purpose. With a voice interface, connectivity, and placement at the center of a home’s activity, such as the kitchen or the den, it is quickly becoming the best way to interact with, and control, the multitude of smart devices people are employing in their homes these days.

But let’s be clear, we are still at the early stages of making homes smart and the very early stages of making smart speakers in one form or another the primary hub that controls the smart home.

The chart below puts this fact into perspective.

As stated above, this chart only shows the market before Apple entered the smart speaker market. It also shows that there is a lot of room to grow this market in the US.

The chart below shows the pricing of these speakers and suggests Apple has some real headwinds when it comes to the HomePod’s ability to gain market share in the smart speaker market. However, if Apple can sell a projected 5-6 million in calendar 2018, it could end the year making the most money and profit in this market segment.

As I have mentioned above, the real purpose of smart speakers, besides giving us information on demand, is really to serve as the control center of a smart home and this will become much more important as a plethora of smart devices flood homes around the world in the next few years.

In my home I have been using Amazon’s Echo and Echo Dots, the Google Mini, and Apple’s new HomePod for getting information, playing music on demand, ordering stuff online, and controlling lights and other connected devices around the house. By far the best device I have is Apple’s HomePod, since it provides superior sound quality compared to the others I use, and I have found Siri to be surprisingly accurate since I started using it a few weeks back. And as a hub, it works flawlessly with the made-for-Apple Home devices too. But so do the Echo and Mini when it comes to controlling connected devices in their respective ecosystems.

Given what I believe was Steve Jobs’ Trojan Horse thinking about the TV as a control hub, and my belief that Apple has made the HomePod an extension of Jobs’ TV vision, I think Apple also needs to follow what Amazon and Google have done and create mini versions of its larger speaker model. The fact is that a hub of this type needs to reside in other places of the home, not just one central location like the kitchen or the den. In my home, the larger speakers are in the kitchen, but I have the Google Minis and Echo Dots in our bedroom, my study, and even our master bathroom.

For Apple to make the HomePod a whole-home hub, it eventually needs to do what I would call a Mini HomePod, but with better-quality speakers than those in the Echo Dot and the Google Mini. While the Google Mini and Echo Dot are $49.99, I would be willing to pay $99 for a HomePod mini if the speaker quality was at least four times better than what is in the Mini and Dot today.

I believe the battle to control the smart home will go through the smart speaker since it will serve as the central control system of the connected home. In that sense, it also becomes a real ecosystem battle. But it starts with the quality and functionality of the smart speaker and the accuracy of its intelligence and controlling functions. The smart speaker is much more than an intelligent speaker. It is on track to become the central controller of the smart home and serve a much greater purpose than just being a speaker and smart agent.

The Blurring Lines of 5G

On the eve of the world’s largest trade show dedicated to all things telecom—Mobile World Congress (MWC), which will be held next week in beautiful Barcelona, Spain—everyone is extraordinarily focused on the next big industry transition: the move to 5G.

The interest and excitement about this critical new network standard is palpable. After years of hypothetical discussions, we’re finally starting to see practical test results, helped along by companies like National Instruments, being discussed, and realistic timelines being revealed by major chip suppliers like Qualcomm and Intel, phone makers like Samsung, network equipment providers like Ericsson, as well as the major carriers, such as AT&T and Verizon. To be clear, we won’t be seeing the unveiling of smartphones with 5G modems that we can actually purchase, or the mobile networks necessary to support them, until around next year’s show—and even those will be more bleeding-edge examples—but we’ve clearly moved past the “I’m pretty sure we’re going to make it” stage to the “let’s start making plans” stage. That’s a big step for everyone involved.

As with the transition from 2G to 3G and 3G to 4G, there’s no question that the move to 5G is also a big moment. These industry transitions only occur about once a decade, so they are important demarcations, particularly in an industry that moves as fast as the tech industry does.

The transition to 5G will not only bring faster network connection speeds—as most everyone expects—but also more reliable connections in a wider variety of places, particularly in dense urban environments. As connectivity has grown to be so crucial for so many devices, the need for consistent connections is arguably even more important than faster speeds, and that consistency is one of the key promises that we’re expecting to see from 5G.

In addition, the range of devices that are expected to be connected to 5G networks is also growing wider. Automobiles, in particular, are going to be a critical part of 5G networks in the next decade, especially as more assisted, semi-autonomous and (eventually) fully autonomous cars start relying on critical connections between cars and with other road-related infrastructure to help them function more safely.

As exciting as these developments promise to be, however, it’s also becoming increasingly clear that the switchover from 4G to 5G will be far from a clean, distinct break. In fact, 5G networks will still be very dependent on 4G network infrastructure—not only in the early days when 5G coverage will be limited and 4G will be an essential fallback option—but even well into the middle of the 2020s and likely beyond.

A lot of tremendous work has been done to build a robust 4G LTE network around the world and the 5G network designers have wisely chosen to leverage this existing work as they transition to next generation standards. In fact, ironically, just before the big 5G blowout that most are expecting at this year’s MWC trade show, we’re seeing some big announcements around 4G.

The latest modem from Qualcomm, the X24, for example, isn’t a 5G model (though the company’s previously announced X50 modem is expected to be the first commercial modem to comply with the recently ratified 5G NR “New Radio” standard), but rather a further refinement of 4G. Offering theoretical download speeds of up to 2 Gbps thanks to 7x carrier aggregation—a technology that allows multiple chunks of radio bandwidth to function as a single data “pipe”—the X24 may, in fact, offer even faster connection speeds than early 5G networks will enable. In theory, first-generation 5G networks should go up to 4 Gbps and even higher, but thanks to the complexities of network infrastructure and other practical realities, real-world numbers are likely to be well below that in early incarnations.
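The arithmetic behind carrier aggregation is straightforward: each aggregated carrier contributes its own peak rate to the combined pipe. The sketch below is a back-of-the-envelope illustration using hypothetical round per-carrier numbers, not Qualcomm’s actual X24 configuration:

```python
# Back-of-the-envelope model of carrier aggregation: the modem bonds
# several chunks of spectrum into one logical "pipe", so the theoretical
# peak throughput is roughly the sum of the per-carrier peaks.
# The per-carrier figures below are hypothetical round numbers chosen
# for illustration only.

def aggregate_peak_mbps(per_carrier_mbps):
    """Theoretical peak of an aggregated link: the sum of carrier peaks."""
    return sum(per_carrier_mbps)

# Seven hypothetical carriers at ~286 Mbps each approach the 2 Gbps mark.
seven_ca = [286] * 7
print(aggregate_peak_mbps(seven_ca))  # 2002, i.e., ~2 Gbps
```

In practice, scheduler overhead, signal conditions, and the number of carriers a given operator can actually bond keep real-world speeds well below this theoretical sum, which is the same gap between promise and reality we should expect from early 5G networks.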

Of course, this is nothing new. In other major network transitions, we saw relatively similar phenomena, where the last refinements to the old network standards were actually a bit better than the first iterations of the new ones.

In addition, a great deal of additional device connectivity will likely be on networks other than official 5G for some time. Standards like NB-IoT and Cat M1 for Internet of Things (IoT) applications actually ride on 4G LTE networks, and there’s little need (nor any serious standards work being done yet) to bring these over to 5G. Even in automotive, though momentum is rapidly changing, the “official” standard for vehicle-to-vehicle (V2V) connections in the US is still DSRC, and the first cars with it embedded in them just came out this year. DSRC is a nearly 20-year-old technology, however, and was designed well before the idea of autonomous cars became more of a reality. As a result, it isn’t likely to last as the standard much longer, given the dramatically increased network connectivity demands that even semi-autonomous automobiles will create. Still, it highlights yet another example of the challenges of evolving to a truly 5G world.

There is no question that 5G is coming and that it will be impactful. However, it’s important to remember that the lines separating current and next generation telecom network standards are a lot blurrier than they may first appear.

Podcast: WiFi Mesh, Qualcomm X24 Modem, Arm Trillium AI Chips, AMD Zen Desktop APUs

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing WiFi mesh home routers, Qualcomm’s new 2 Gbps X24 LTE Modem, Arm’s Trillium AI chips, and AMD’s new Zen desktop APUs.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Buzz Around Device as a Service Continues to Grow

This week Device as a Service (DaaS) pioneer HP announced it was expanding its hardware lineup. In addition to adding HP virtual reality products including workstations and headsets, the company also announced it would begin offering Apple iPhones, iPads, and Macs to its customers. It’s a bold move that reflects the intense and growing interest in this space, as well as Apple’s increasingly prominent role on the commercial side of the industry.

First Came PCaaS
IDC’s early research on PC as a Service (PCaaS) showed the immense potential around this model. It’s exciting because it is a win/win for all involved. For companies, shifting to the as-a-service model means no longer having to budget for giant capital outlays around hardware refreshes. As IT budgets have tightened, and companies have moved to address new challenges and opportunities around mobile, cloud, and security, device refreshes have often extended out beyond what’s reasonable. Old PCs limit productivity and represent ongoing security threats, but that hasn’t stopped many companies from keeping them in service for five years and more.

PCaaS lets companies pay an ongoing monthly fee that builds in a more reasonable life cycle. That fee can also include a long list of deployment and management services. In other words, companies can offload the day-to-day management of the PC from their IT to a third party. And embedded within these services is the ability of the provider to capture analytics that help guide future hardware deployments and ensure security compliance.

PC vendors and other service providers offering PCaaS like it because it allows them to capture more services revenue, shorten product lifecycles, and smooth out the challenges associated with the historical ebb and flow of big hardware refreshes often linked to an operating system’s end of life. HP was the first major PC vendor to do a broad public push into the PCaaS space, leveraging what the company learned from its managed print services group. Lenovo has been dabbling in the space for some time but has recently become more public about its plans here. And Dell has moved aggressively into the space in the last year, announcing its intentions at the 2017 DellWorld conference. Each of the three major PC vendors brings its own set of strengths to the table in this competitive market.

Moving to DaaS
HP’s announcement about offering more than just PCs, as well as Apple devices, is important for several reasons. Chief among them is that in many markets, including the U.S. (where this is launching first), iOS has already established itself as the preferred platform in many companies. By acknowledging this, HP quickly makes its DaaS service much more interesting to companies who have shown an interest in this model, but who were reluctant to do so if it only included PCs. Second, while HP has a solid tablet business, it doesn’t have a viable phone offering today. For many companies, this would be an insurmountable blocker, but to HP’s credit, it owned this issue and went out and found the solution in Apple. It will be interesting to see if the other PC vendors eventually announce similar partnerships with age-old competitors. It’s worth noting that Dell also doesn’t have a phone offering, while Lenovo does have a phone business that includes the Moto brand.

It was also very heartening to see HP announce it would begin offering its virtual reality hardware as a service, too. Today that means the HP Z4 Workstation and the HP Windows Mixed Reality VR headset, but over time I would expect that selection to grow. As I’ve noted before, there is strong interest from companies in commercial VR. By offering the building blocks As A Service, HP enables companies to embrace this new technology without a massive capital outlay up front. I would expect to see both Dell and Lenovo, which also have VR products, to do the same in time. And while VR represents a clear near-term opportunity, Augmented Reality represents a much larger commercial opportunity long term. There’s good reason to believe that many companies will turn to AR as a Service as the primary way to deploy this technology in the future. And beyond endpoint devices such as PCs, tablets, phones, and headsets, it is reasonable to expect that over time more companies will look to leverage the As A Service model for items such as servers and storage, too.

Today just a small percentage of commercial shipments of PCs go out as part of an as-a-service agreement, but I expect that to ramp quickly in the next few years. The addition of phones, tablets, AR/VR headsets, and other hardware will help accelerate this shift as more companies warm to the idea. That said, this type of change doesn’t come easily within all companies, and there will likely continue to be substantial resistance inside many of them. Much of this resistance will come from IT departments who find this shift threatening. The best companies, however, will transition these IT workers away from the day-to-day grind of deployment and management of devices to higher-priority IT initiatives such as company-wide digital transformation.

At IDC we’re about to launch a new research initiative around Device as a Service, including multiple regional surveys and updated forecasts. We’ll be closely watching this shift, monitoring what works, and calling out the areas that need further refinement. Things are about to get very interesting in the DaaS space.

Is Qualcomm’s Connected PC a threat to Intel?

I have been spending a lot of time lately talking with clients and people in the industry about Qualcomm and Microsoft’s push to create an ARM-based platform for Windows laptops. Although these two companies launched a Windows on ARM program four years ago, that initiative failed due to the underpowered ARM processors available at the time and a version of Windows that did not work well on those early ARM-based laptops.

The new program launched by Qualcomm and Microsoft in early December is called the Connected PC initiative and uses Qualcomm’s Snapdragon 835 and 845 processors along with a new version of Windows 10 S that is optimized for use on these ARM-based laptops. As I have written recently, while the connected portion of this program is interesting, our research shows that actual demand for connectivity in a notebook was #6 on a list of preferred features when we surveyed people who were interested in buying new laptops.

Number one on this list was battery life. The good news for Qualcomm and Microsoft is that their Connected PC program also stresses long battery life, and Qualcomm expects that laptops using the Snapdragon 835 and 845 will provide at least 22 hours of continuous use. My own belief is that instead of touting Always Connected and Always On as their tagline, they should reverse the order to Always On and Always Connected.

I also see this push toward all-day computing as a potential game-changer for the PC industry and, if done right, one that could spur a new laptop refresh cycle that could last at least three years.

While this push for all-day computing looks to be important to Qualcomm and Microsoft, it should be equally important to Intel if the concept of an all-day laptop has the potential to drive a lot of new laptop sales in the future. But here lies the big question. Intel’s PC processors have never really been optimized for long battery life because performance has been at the heart of its CPU mantra. Indeed, as semiconductor manufacturing processes have gone from 22 nm to Intel’s current 10 nm process, better battery life and more performance have followed. But performance has always topped battery life in Intel’s strategy.

The processor Intel would be pushing into this all-day computing genre most likely will be Lakefield and its future iterations. This is a very important mobile chip for Intel in that, while delivering solid performance, its second goal is to deliver longer battery life.
But here lies the billion-dollar question for Intel and the industry. Can Intel compete on long battery life with Qualcomm’s 835 and 845 processors, which Qualcomm states can deliver at least 22 hours of continuous use, and with Qualcomm’s belief that new chips in the works could get to well over 30 hours by mid-2019?

Sources I have talked to who have tested the current Lakefield processor say at best it can get perhaps 18-20 hours of continuous use if the conditions are just right. But when I asked these sources if they believe Intel could ever get the kind of battery life Qualcomm can deliver, they say they doubt it. Of course, Intel would argue that it will still have the edge in performance and might even say that in the real world people plug their laptops in overnight and nobody uses a PC for 22+ hours continuously.

While Intel’s argument has merit, its ability to compete in the all-day computing thrust that I believe will jumpstart a new refresh cycle for the PC industry will depend on the answers to the following questions:

1. How much battery life does a user want? Is 18 hours enough? 22 hours?

2. Will Qualcomm’s long-life processors have enough power to meet users’ basic computing needs, i.e., video, web browsing, etc.? Or do they need more processing power to handle advanced graphics, extensive numerical calculations, etc.?

3. Will 22-30+ hours of battery life in a laptop be enough to cause people to want to refresh their older models, which in most cases get less than 10 hours of battery life today? Is the prospect of heading off for the day without even needing to carry a charging cable appealing enough to get people to start replacing their current laptops in large numbers?

4. How will long-life batteries in laptops influence design?

5. Microsoft’s significant endorsement of Qualcomm’s Windows on ARM speaks volumes about the processor-neutral position Microsoft seems to be taking on CPU support. Intel is where it is today in PCs because of Microsoft. Could Qualcomm ride this support from Microsoft and the Always On, Always Connected initiative to become a serious threat to Intel’s PC future?

These and other critical questions about the future of laptops are part of our research, and as we get answers, we will report back. My take is that Intel will be challenged by this Microsoft and Qualcomm initiative and could see its dominant role as the main CPU supplier for PCs challenged by Qualcomm shortly.

New AMD chip takes on Intel with better graphics

Nearly a full year after the company started revamping its entire processor lineup to catch up with Intel, AMD has finally released a chip that can address one of the largest available markets. Processors with integrated graphics ship in the majority of PCs and notebooks around the world, but the company’s first Ryzen processors released in 2017 did not include graphics technology.

Information from Jon Peddie Research indicates that 267 million units of processors with integrated or embedded graphics technology were shipped in Q3 2017 alone. The new AMD part that goes on sale today in systems and the retail channel gives AMD an opportunity to cut into Intel’s significant market leadership in this segment, replacing a nearly 2-year-old product.

Today AMD stands at just 5% market share in the desktop PC space with integrated graphics processors, a number that AMD CEO Lisa Su believes can grow with this newest Ryzen CPU.

Early reviews indicate that the AMD integrated graphics chips are vastly superior to the Intel counterparts when it comes to graphics and gaming workloads and are competitive in standard everyday computing tasks. Testing we ran that was published over at PC Perspective shows that when playing modern games at mainstream resolutions and settings (720p to 1080p depending on the specific title in question), the Ryzen 5 2400G is as much as 3x faster than the Core i5-8400 from Intel when using integrated processor graphics exclusively. This isn’t a minor performance delta and is the difference between having a system that is actually usable for gaming and one that isn’t.

The performance leadership in gaming means AMD processors are more likely to be used in mainstream and small form factor gaming PCs and should grab share in expanding markets.

China and India, both regions that are sensitive to cost, power consumption, and physical system size, will find the AMD Ryzen processor with the updated graphics chip on-board compelling. AMD offers much higher gaming performance using the same power and at a lower price. Intel systems that want to compete with the performance AMD’s new chip offers will need to add a separate graphics card from AMD or NVIDIA, increasing both cost and complexity of the design.

Though Intel is the obvious target of this new product release, NVIDIA and AMD (ironically) could also see impact as sales of low-cost discrete graphics chips won’t be necessary for systems that use the new AMD processor. This will only affect the very bottom of the consumer product stack though, leaving the high-end of the market alone, where NVIDIA enjoys much higher margins and market share.

The GT 1030 from NVIDIA and the Radeon RX 550 from AMD are both faster in gaming than the new Ryzen processor with integrated graphics, but the differences are small enough that consumers in this space are likely to see it as a wash. Adding to the story is the fact that the Ryzen processor is cheaper, draws less power, and puts fewer requirements on the rest of the system (lower-cost power supply, smaller chassis).

AMD’s biggest hurdle now might be to overcome the perception of integrated processor graphics and the stigma it has in the market. DIY consumers continue to believe that all integrated graphics is bad, a position made prominent by the lack of upgrades and improvements from Intel over the years. Users are going to need to see proof (from reviews and other users) to buy into the work that AMD has put into this product. Even system integrators and OEMs that often live off the additional profit margin of upgrades to base system builds (of which discrete graphics additions are a big part) will push back on the value that AMD provides.

AMD has built an excellent and unique processor for the mainstream consumer and enterprise markets that places the company in a fight it has been absent from for the last several generations. Success here will be measured not just by channel sales but also by how much inroad it can make in the larger consumer and SMB pre-built space. Messaging and marketing the value of having vastly superior processor graphics is the hurdle leadership needs to tackle out of the gate.

Apple Watch’s Big Quarter and a Series of Firsts

It is no secret that I’ve been very bullish on Apple Watch since day one. I’ve held my ground against the naysayers and defended this product because I believed in it and the broader role it can and will play in the future of computing. After a rough second year, when many of the naysayers thought they were right, Apple Watch is truly gaining steam.

I love this headline even though it is wrong: “Apple and Android are destroying the Swiss Watch Industry.”

Notice the trend line of Swiss watches. It’s not dropping off dramatically but rather has stayed consistent. That being said, that market is not getting bigger, and it does run the risk of being disrupted by smartwatches. So it is relevant that one of Apple Watch’s firsts is that it outsold the entirety of the luxury Swiss brands’ volume. That being said, any impact on the Swiss market is coming from Apple, not Apple and Android as the headline suggests. Android smartwatches were a tiny sliver of overall smartwatch sales, likely shipping not much more than a million units by my estimates. I’ve seen a few estimates go as high as 1.5 million, but I believe that to be generous.

Another key first for Apple Watch in the wearable market narrative is that Apple overtook Fitbit in annual sales of wrist-worn technology for the first time.

While we don’t have data yet to fully validate this, my original theory was that Fitbit would serve as a feeder to Apple Watch. Meaning a consumer who just wanted a low-cost health/fitness tracker would start with a Fitbit because of brand, cost, etc. Over time that consumer would realize they found value in getting health/fitness data and would then graduate to an Apple Watch. In some very early survey work I did with Wristly, we did find pockets of data suggesting consumers were replacing their Fitbits with Apple Watch, but I hope to quantify this on a larger level when we do our next wearable study.

Regardless of how Apple Watch got here, after three years it is finding its place in the market. The challenge Apple Watch faced in its first years was people expecting it to launch out of the gate and grow with a steep S-curve, sort of like the iPad did. I knew that would not be the case, and to all those who called it a failure because its sales didn’t ramp as expected, I tried to caution that the adoption cycle of a product like Apple Watch would be a slow ramp, and that is exactly what we see happening.

When it comes to the future for Apple, and the industry holistically, wearables are the path to what’s next, which is why I continue to maintain that Apple’s work in miniaturization remains cutting edge and critically important. People want to talk about AR glasses but, honestly, we will wear computers on our wrists and in our ears way before we wear them on our faces en masse.

But as I pointed out yesterday, voice assistants will become one of the primary ways we interact with these computers on our wrists, in our ears, and eventually on our faces. This is why the further development of miniaturized computers, coupled with the integration of smart, personalized, and contextually aware voice assistant/AI systems, are critical innovations to keep our eyes on.

While the health benefits of Apple Watch will continue to be a major draw, it is encouraging that consumers are grasping the wider value after their initial purchase. Communication features, both notifications and responding to texts and calls, are all things we see growing in value with Apple Watch owners. Many of the same leading-indicator data points we tracked very early on that gave me hope in the Apple Watch as a product are still true today and in some cases stronger than they were in 2015. For that reason, I’m still convinced of the upside for Apple Watch to grow and evolve and move the broader industry forward. Apple is likely to remain the biggest beneficiary here, and we will see if the rest of the industry can catch up at some point.

What’s at Stake in the Voice Assistant Race

It is interesting that Apple’s HomePod has ignited a broader philosophical debate, within the tech industry and among pundits and observers, around what is really at stake with voice assistants in the future. Everyone has an opinion on this, and there are real implications for the future to think about as well.

It is worth making a high-level point right off the bat. In many discussions I’ve had behind closed doors on this subject, it seems folks like to explore whether voice assistants are disruptive. I dislike this word because it is misused more often than it is correctly used or understood. But the question I hear specifically is mainly whether these voice assistants are disruptive to Apple. The answer is no, and I’ll briefly explain why.

At its core, disruption of an incumbent impacts its core business model by way of losing its customers. For Apple to be disrupted by something, that something has to lead to Apple losing customers who then stop buying its hardware. There is no guarantee Apple will not be disrupted in some way like this someday, but I’m confident it is nowhere near the immediate or relatively distant future. What I don’t think is at stake with these voice assistants is Apple’s disruption; what I do think is at stake is Apple’s sharing of its highly valuable customer base with other companies.

Now, it is clear Apple doesn’t have too much of a problem sharing its customer base with the likes of Amazon, Netflix, and even Google to some degree, since they are not closed off from others in its ecosystem. This is glaringly evident as Apple CEO Tim Cook shares each quarter how Apple is growing the number of subscriptions its platform is driving. At Apple’s annual shareholder meeting yesterday, Cook said Apple “holds” almost a quarter of a billion subscriptions for both its services and other apps. In his own words: “We’ve built a muscle in how to do that. I think that will be good for Apple in the future as well.” I think this is an incredibly important statement.

Apple has the largest, most profitable base of consumers on the planet. This is evident in the 1.3 billion devices in use and in the continued growth of its services business, which includes subscriptions. I have no doubt Apple wants to increase the number of subscribers to its first-party services like Apple Music, iCloud, and, eventually, a likely TV offering. But for that to happen, those services need to be competitive with third-party services. The question for Apple will be which first-party services it feels are core, areas where it can make a real difference and has a competitive advantage.

In that vein, the question of Siri vs. the competition arises, because Siri is a service. Looking to the future, I believe Siri is an important platform for Apple as a part (not the entirety) of the human interface of the future. Voice will be one of the primary ways we interact with computers at some point down the road. Therefore, Siri is essentially an important UI strategy for Apple, as important as iOS is to Apple today. Google knows this, which is why they are full throttle on Google Assistant. Amazon knows this, which is why they are tripling down on Alexa. So what is at stake for Apple in this future is how much they want to share the voice assistant their customers use with anyone besides themselves. Does Apple want their customers to use Siri 100% of the time? 70%? 50%? This is the long-term philosophical conversation.

When I made the analogy of Apple being willing to share its customers with others like Amazon, Netflix, etc., via apps on its platform, I’m not sure the same kind of sharing applies in the voice assistant realm. Voice assistants, I believe, will offer levels of convenience we can’t even imagine today. And if I’m right that they are the basis of a new operating system in the post-smartphone world, then their role today in building dependence on, and relationships with, customers is of strategic importance.

This is the reason I bring up the importance of a product like HomePod for Apple. It isn’t the product itself but rather that HomePod represents another way for consumers to interact with Siri in the home. If Siri is important to Apple, then it is in Apple’s strategic interest to give customers as many opportunities to interact with Siri in whatever way they want and feel comfortable. The one thing we learned loud and clear from voice speakers like Echo and Google Home is how comfortable consumers are speaking to these products in their home; more comfortable, in fact, than speaking to their smartphone, PC, or smartwatch.

Apple is at risk of having to share too many of its customers with competitors like Amazon and Google in the ambient computing platform race, and while Apple may think this isn’t a big deal right now, I tend to think it could have broader strategic implications if they are not careful.

Do We Really want an Always Connected PC?

I suppose, first, we should ask if we want a PC at all! Our recent US study of 1,262 consumers says we do. Less than one percent of the panelists said they have no intention of buying another PC or Mac. As a matter of fact, twenty-five percent of the panel is in the market to buy a new PC or Mac in the next twelve months.

What do We want When buying a Notebook?

Well, it depends who you are!

No matter which brand of PC we own, or how savvy a user we are, when it comes to notebooks there is one thing we want out of the next computer we buy: longer battery life. Fifty-nine percent of our panelists picked battery life as a must-have feature in their next PC, one third more than for any other feature.

The other top two features differ a little depending on the camp you are in. While not strictly a feature, brand comes in as the second most important consideration for Mac buyers, which I am sure is no surprise, as with brand you buy into an entire ecosystem of both software and devices. Outside of Apple users, current PC owners rank brand only sixth in importance, which poses some interesting challenges for the many brands in the Windows ecosystem trying to establish themselves in the high end. Going back to hardware, what comes after battery very much depends on the kind of user you are. For early adopters a higher resolution display matters (34%), but for everybody else, including Mac owners, it is about more memory.

So where is connectivity in the list of features for our next notebook? Not much of a priority it seems.

Only 23% of our panel picked cellular connectivity as one of the top three features they want in their next notebook. Even more interesting, only 19% of early tech adopters did so. I believe there are a couple of things at play here: either early tech adopters are quite happy to use their phone as a hotspot when they need connectivity, or they are simply happy to use their phone for all of their on-the-go needs. It seems that, in this case, being tech-savvy works against a feature that is being marketed as cutting edge. Where cellular connectivity resonates is with mainstream users (28% of whom listed it in their top three features) and late adopters (20%). It seems to me that, with these users, the marketing message around the PC being the same as your phone is working quite well.

The short-term opportunity, considering current owners in the market to upgrade their notebook within the next twelve months, is not much more encouraging, as only 25% of them picked cellular connectivity as a top-three must-have.

We also wanted to see if people who have a more flexible work setup, in both hours and location, might be better placed to appreciate such a feature, but it does not seem to be the case. Cellular was, in fact, selected as a top-three feature by only 19% of panelists fitting that work style.

We say We want it but do We want to pay for It?

While the interest in cellular was not great, let’s dig a little deeper and understand what kind of premium potential buyers are willing to pay for the luxury of being connected any time any place.

We asked panelists to imagine the following scenario: “You are purchasing your next notebook, and you have settled on a model when you are told that it comes in a variant offering 22 hours of continuous battery life and always-on internet connectivity. What premium (above the cost of the model without those features) would you be prepared to pay for it?”

Maybe conditioned by the current iPad offering, which still puts a $100 premium on cellular, or maybe because it is the sweet spot for this feature, 34% of the panelists would consider paying up to $100 more. Seventeen percent would choose the cheaper model, and another 12% would expect those features to come as standard. This picture does not change much even among people who picked cellular connectivity among their top three must-have features.

Where we find a considerable difference is in the willingness to pay a monthly fee for that cellular connectivity. Among consumers who are interested in cellular capability, only 19% said they were not interested in paying monthly, while among the overall panel that number almost doubles, to 39%.

When companies talk about PC connectivity and point to user behavior with smartphones as a way to gauge demand and success potential, I think they are missing the mark. There are two major differences that play a big role in how consumers will interact with PCs compared to their phones:

  • Smartphones are truly mobile, while PCs are nomadic. This is a big difference, as it implies I might use my phone while I walk or while I am standing in a crowded train or bus, but I would never do that with a PC. When I use a PC I am sitting somewhere, and more often than not that place will have Wi-Fi. This is certainly true in the US, but less so in Europe and Asia, which is why those markets offer better opportunities for cellular-enabled PCs.
  • The other factor that I think is not considered enough is the much wider application pool we have to choose from on our smartphones compared to our PCs. On the smartphone it is not just about email and video; it is about social media, news, books, chat, gaming, and the list goes on. So, in a way, there are more things I can do with my smartphone that I might want to do while on the go than I will ever be able to do on my PC. Sometimes having a larger screen is a disadvantage, not just for portability but for privacy too.

Does Always-Connected Simply mean Always-on?

Maybe when we think of connectivity, we think more about power than cellular. Judging from the craving for longer battery life that shows up in our data, it sure seems that way. That is the one feature we all agree we want in our future notebook. Our panelists would even consider buying a PC with a processor brand they were not familiar with as long as it delivered on battery: 29% would do so for a notebook delivering between 14 and 16 hours, another 17% would want 16 to 18 hours, and another 17% would want 18 to 20 hours. Early adopters are even more demanding, with 35% wanting between 14 and 16 hours before they would consider an unfamiliar processor brand.

This is where the short-term opportunity for Qualcomm and Microsoft and their always-connected PC really is. Among the panelists looking to upgrade in the next 12 months, a whopping 67% would consider a PC with an unfamiliar processor brand if it delivered between 14 and 20 hours of battery life.

Help Me 5G You’re My Only Hope

I live with modern technology, and with the bleeding edge of technology in my home. But I don’t live with the modern Internet. What I mean is that I don’t live with modern Internet speeds. Brace yourself when I tell you this, but my average broadband speed at home is 3.5 Mbps. Yes, megabits per second. My home broadband speed is not that different from the average speeds of third world countries. In fact, several third world countries have better broadband than I do.

I fall into the last mile problem we have here in the United States where estimates indicate approximately 15-20% of America lives in rural areas where broadband infrastructure is lacking. I live in a rural part of Silicon Valley that happens to be one of the last few unincorporated counties in the state of California. The price I pay for peace, quiet, land, and space is crap broadband.

For me, satellite Internet is not an option due to the latency problem that comes with it. Some folks in our neighborhood can tolerate satellite latency, but I can’t. Instead, a local internet company provides line-of-sight wireless Internet access, but speeds go no faster than 10 Mbps. The double whammy of my predicament is the cost of our Internet. For 5 Mbps of that service, I pay 135 dollars a month. It’s a wonder anyone out here in the sticks has Internet at all, between the price gouging and the slow speeds. My neighbor works for Cisco, works from home, and lives on video conferences all day. I don’t know how she does it at 10 Mbps without going insane.
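To put those numbers in perspective, the price-per-megabit math is straightforward. A minimal sketch, using the rural figures above; the urban comparison plan ($80 for 250 Mbps) is a hypothetical price purely for illustration:

```python
def price_per_mbps(monthly_cost: float, speed_mbps: float) -> float:
    """Monthly cost per Mbps of downstream speed."""
    return monthly_cost / speed_mbps

# Figures from the article: $135/month for a 5 Mbps connection.
rural = price_per_mbps(135, 5)    # $27.00 per Mbps
# Hypothetical urban cable plan for comparison: $80/month for 250 Mbps.
urban = price_per_mbps(80, 250)   # $0.32 per Mbps
print(f"rural ${rural:.2f}/Mbps vs. urban ${urban:.2f}/Mbps")
```

By this rough measure, the rural connection costs nearly a hundred times more per megabit than a typical urban one.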

I’ve looked at every option, including the wireless home broadband offerings from carriers, which still aren’t much better. See this price-to-Mbps ratio from Verizon’s home LTE broadband service.

At some point, 5G will be my only hope. I’m just afraid it is still a long way off. The hope is that 5G will expand the existing broadband infrastructure to allow more people to connect to each access point and to support higher speeds per connection than exist today. The other hope is that the way new 5G network technology is developed will not cost as much as past next-generation networks, meaning prices will hopefully not increase but instead decrease. I have no idea if it will happen this way, but from discussions I’ve had with the network operators on this issue, conceptually I should be able to get 5G speeds at home within the same price range I pay today for 5 Mbps. Fingers crossed!

I bring this up to make a broader point about what carriers’ goals with 5G will be and how they fit into the increasingly connected device landscape. At some point you, the consumer, may have upward of five connected devices just to yourself. For example, your smartphone, smartwatch, PC or tablet, automobile, earphones, and smart glasses (someday) may all have modems built into them. The barrier to connecting these devices to the Internet will be the additional cost per device. If we have to pay $10-$20 per month per connection, consumers simply will not do it.

Instead, carriers will move to *unlimited plans. I add the asterisk because while these plans will be unlimited from a data perspective, they will not be unlimited from a speed standpoint. You will pay a set price for a tier of data. For example, $100 per month for 30 GB of data usage at peak speeds. Once you use up that 30 GB of data you don’t pay any more money, but your speeds slow down to 3-5 Mbps. In this scenario, carriers will want you to bring all your connected devices onto this plan. So if you have a connected car, watch, earphones, PC/tablet, etc., you can bundle them into the plan, meaning you still pay only one total price per month.
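The tier-then-throttle mechanics can be sketched as a simple function. This uses the illustrative numbers from the paragraph above, not any actual carrier's plan; the peak speed and the 4 Mbps throttle value (the midpoint of the 3-5 Mbps range) are assumed placeholders:

```python
def plan_state(data_used_gb: float,
               cap_gb: float = 30.0,        # peak-speed allotment
               monthly_price: float = 100.0,
               peak_mbps: float = 100.0,    # assumed peak speed
               throttled_mbps: float = 4.0):
    """Return (monthly_cost, current_speed_mbps) for a tiered
    'unlimited' plan: the price never changes, but once the
    peak-speed allotment is used up, the connection is throttled."""
    speed = peak_mbps if data_used_gb <= cap_gb else throttled_mbps
    return monthly_price, speed

print(plan_state(12.0))  # under the cap: full speed, same price
print(plan_state(45.0))  # over the cap: throttled, same price
```

Note that the cost is constant in both branches; only the speed changes, which is exactly the "pain" trade-off described here.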

In this scenario, which is the one I’m told is most likely, the pain you will incur as a customer when you go over your allotted amount of data is not massively increased prices. Instead, the pain will be slow Internet speeds. Honestly, I’m not sure which is worse.

The more I think about connected device trends, and how user behavior seems to consistently outpace innovation in broadband, the more I’m convinced we are decades away from having enough broadband for every person on the planet. 5G will hopefully bring us one step closer to consolidating all our connected devices into a reasonably priced bundle for consumers AND bring rural customers modern-day broadband speeds. We expect 5G around the 2020 timeframe, and hopefully the carriers will stay on track.

The Modern State of WiFi

So easy to take for granted, yet impossible to ignore. I’m speaking, of course, of WiFi, the modern lifeblood of virtually all our tech devices. First introduced in 1999 as a somewhat odd marketing term (it’s commonly believed to be short for “Wireless Fidelity”), the wireless networking technology leverages the 802.11 technical standards, which first appeared in 1997.

Since then, WiFi has morphed and adapted through variations including 802.11b, a, g, n, ac, ad, and soon, ax and ay, among others, and has literally become as essential to all our connected devices as power. Along the way, we’ve become completely reliant on it, placing utility-like demands upon its presence and its performance.

Unfortunately, some of those demands have proven to be ill-placed as WiFi has yet to reach the ubiquity, and certainly not the consistency, of a true utility. As a result, WiFi has become the technology that some love to hate, despite the incredibly vital role it serves. To be fair, no one really hates WiFi—they just hate when it doesn’t work the way they want and expect it to.

Part of the challenge is that our expectations for WiFi continue to increase, not only in terms of availability, but also speed, range, number of devices supported, and much more. Thankfully, a number of improvements, in both component technology and product design, have started to appear that help bring WiFi closer to the completely reliable and utterly dependable technology we all want it to be.

One of the most useful of these for most home users is a technology called WiFi mesh. First popularized by smaller companies like Eero nearly two years ago, then supported by Google in its home routers, WiFi mesh systems have become “the” hot technology for home WiFi networks. Products using the technology are now available from a wide variety of vendors including Netgear, Linksys, TP-Link, D-Link and more. These WiFi mesh systems consist of at least two (and often three) router-like boxes that all connect to one another, boosting the strength of the WiFi signal and creating more efficient data paths for all the devices connecting to the Internet. Plus, they do so in a manner that’s significantly simpler to set up than range extenders and other devices that attempt to improve in-home WiFi. In fact, most of the new systems configure themselves automatically.

From a performance perspective, the improvements can be dramatic, as I recently learned firsthand. I’ve been living with about a 30 Mbps connection from the upstairs home office where I work down to the Comcast Xfinity home gateway providing my home’s internet connection, even though I’m paying for Comcast’s top-of-the-line package that theoretically offers download speeds of 250 Mbps. After I purchased and installed a three-piece Netgear Orbi system from my local Costco, my connection speed over the new Orbi WiFi network jumped by over 5x to about 160 Mbps—a dramatic improvement, all without changing a single setting on the Comcast box. Plus, I’ve found the connection to be much more solid and not subject to the kinds of random dropouts I would occasionally suffer through with the Xfinity gateway’s built-in WiFi router.

In addition, there were a few surprise benefits to the Netgear system that, though they may not be relevant for everyone, really sealed the deal for me. In another upstairs home office, there’s a desktop PC and an Ethernet-equipped printer, both of which had separate WiFi hardware. The PC used a USB-based WiFi adapter, and the printer had a WiFi-to-Ethernet adapter. Each of the “satellite” routers in the Orbi system has four Ethernet ports supporting up to Gigabit speeds, allowing me to ditch those flaky WiFi adapters and plug both the PC and printer into a rock-solid, fast Ethernet connection on the Orbi. What a difference that made as well.

The technology used in the Netgear Orbi line is called a tri-band WiFi system because it leverages three simultaneously functioning 802.11 radios: one that supports 802.11b/g/n at 2.4 GHz for dedicated connections with older WiFi devices, and two that support 802.11a/n/ac at 5 GHz. One of the 802.11ac-capable radios handles connections with newer devices, and the other is used to connect with the other satellite routers and create the mesh network. The system also uses critical technologies like MU-MIMO (Multi-User, Multiple Input, Multiple Output) to leverage several antennas, and higher-order modulation schemes like 256 QAM (Quadrature Amplitude Modulation) to improve data throughput speeds.
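A rough way to picture the division of labor among those three radios is sketched below. This is purely an illustrative model of the roles described above, not Netgear's actual firmware logic:

```python
# Illustrative model of a tri-band mesh node's three radios.
RADIOS = {
    "2.4GHz b/g/n":        "client traffic from older WiFi devices",
    "5GHz a/n/ac (front)": "client traffic from newer WiFi devices",
    "5GHz a/n/ac (back)":  "dedicated backhaul between satellite routers",
}

def client_radio(supports_5ghz: bool) -> str:
    """Pick the client-facing radio for a device. The backhaul radio
    is reserved, so mesh traffic between satellites never competes
    with client traffic for airtime."""
    return "5GHz a/n/ac (front)" if supports_5ghz else "2.4GHz b/g/n"
```

The key design point is that dedicating one 5 GHz radio entirely to backhaul is what lets a tri-band mesh avoid the speed-halving penalty of traditional range extenders, which must receive and retransmit on the same radio.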

Looking ahead in WiFi technology from a component perspective, we’ve started to see the introduction of pre-standard silicon for the forthcoming 802.11ax standard, which offers some nominal speed improvements over existing 802.11ac, but is more clearly targeted at improving WiFi reliability in dense environments, such as large events, tradeshows, meetings, etc. There’s also been some discussion about 802.11ay, which is expected to operate in the 60 GHz band for high speeds over short distances, similar to the current 802.11ad (formerly called WiGig) standard.

As with previous generations of WiFi, there will be chips from companies like Qualcomm that implement a pre-finalized version of 802.11ax for those who are eager to try the technology out, but compatibility could be limited, and it’s not entirely clear yet if devices that deploy them will be upgradable when the final spec does get released sometime in 2019.

The bottom line for all these technology and component improvements is that even at the dawn of the 5G age, WiFi is well positioned for a long, healthy future. Plus, even better, these advancements are helping the standard make strong progress toward the kind of true utility-like reliability and ubiquity for which we all long.

Podcast: Apple HomePod, Google-Nest Integration, Twitter and Nvidia Earnings

This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing Apple’s HomePod smart speaker, the re-integration into Google of the Nest smart home products business, and the quarterly earnings for Twitter and Nvidia.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Learnings from Qualcomm’s ‘5G Day’

This past week, Qualcomm hosted analysts and trade press for a ‘5G Day’, where they charted their progress on 5G and announced 18 operator and 19 OEM commitments to their X50 5G chipset. But in addition to the major news, this presents a good opportunity to reflect on the state of 5G, particularly on the eve of Mobile World Congress, where I’m told you’re not admitted unless you pledge to say the words ‘5G’ at least 50 times a day.

So here are my top takeaways:

  1. 5G is on track. If only our government could pass a budget or we could get going on repairing our infrastructure with this level of urgency! Qualcomm’s announcements about modem availability and OEM/operator agreements (plus expected news from the network equipment folks at MWC) tell us that we will see large-scale trials and some initial mobile 5G services (i.e. not just fixed wireless) launch in late 2018, with more widespread launches and commercial device availability in 2019. This initial phase will be ‘non-standalone’, which means a simultaneous connection to both LTE and 5G. We’ll see a handful of cities turned up initially, and within those cities, swiss-cheesy coverage on a limited number of devices. This will be on a combination of sub-6 GHz and mmWave spectrum, depending on the operator (but tending toward sub-6 GHz initially).
  2. The Level of Technical Accomplishment is Impressive. A few of the highlights:
  • Smaller than expected form factor in the X50 chip, which will help from the standpoint of power efficiency
  • Carrier aggregation of up to 8 channels in the X50 chip. That’s what will allow us to get to gigabit-or-better service, if the operators have the spectrum and open up the floodgates.
  • Real-world demos that showed download speeds of 4 Gbps or better, latency below 5 milliseconds (ms), and, in a pleasant surprise, upload speeds of up to 360 Mbps (a dramatic improvement over today’s LTE).
  • Important advances in antennas, beam-forming capabilities, and spatial multiplexing. This is reflected in the improvements in latency and in performance at the cell edge.
  • The sheer complexity of RF systems and the number of bands and band combinations that have to be successfully supported.
  3. Millimeter Wave Remains a Bit of a Wildcard. My impression is that there is still a lot being figured out about how to design devices given the finickiness of mmWave signals. For example, the signal degrades much more quickly if your hand is covering part of the phone. This presents particular challenges in antenna placement. Another under-discussed wildcard, in my view, is what battery performance will look like with mmWave.
  4. LTE Will Play an Important Role for the Foreseeable Future. Irrespective of the standalone/non-standalone discussion, LTE is going to be a big part of 5G. For the next several years, it’s going to be ‘islands of 5G’ in a sea of LTE. We will need LTE in order to continue to provide reliable voice coverage (yes, some people still make calls on their phones), since a standalone 5G network would be all IP and would not have reliable enough coverage to support voice. Even with all the hype around 5G, the LTE roadmap is compelling. Don’t get me wrong – 5G is a big jump up from 4G in many respects and opens up some new market opportunities. But, if the operators enable some of the new capabilities in the LTE roadmap, your phone will be able to do pretty much anything you would want it to do, until someone comes along with that killer AR/VR app. Economics and data capacity will be big drivers of the move to 5G.
  5. 5G Will Be About A Lot More Than Smartphones. This is really one of the big stories in 5G. It’s being built to support a very large number of connected devices, with highly varying demands on the network. This has been the #2 or #3 item in 5G PowerPoints up till now, but it is now being treated as a real development priority. At the Qualcomm event, we saw some impressive real-world simulations of millions of IoT devices connected within a part of a city.
  6. Major Investment is Going into Building New ‘Ecosystems’ for 5G. I was impressed with the level of effort going into thinking about specific solutions for some very large verticals, among them the automotive and industrial IoT segments.

There will be a lot more 5G related news in the coming weeks. But on some of the most challenging aspects of 5G development, things appear to be well on track and the accomplishments are impressive.

News You might have missed: Week of February 9, 2018

Nest is rolled back into Google

On Wednesday, Alphabet Inc. announced that it has rolled Nest into its hardware group. Under the new org structure, Nest CEO Marwan Fawaz reports to Google’s hardware chief, Rick Osterloh, a former Motorola executive who took charge of all Google’s consumer devices in 2016. That includes Google Home smart speakers, Pixel smartphones, and Chromecast streaming devices.

Via Reuters

  • With Tony Fadell gone (you might remember he left Nest in the summer of 2016) and with Google’s renewed focus on hardware, it makes sense for Nest to fall under Rick Osterloh’s organization.
  • As I pointed out on Twitter, if you followed the Pixel 2 launch you had to know this change was coming. At the event, the new Nest Camera was playing very nicely with Chromecast and Google Home devices, yet it was not as deeply integrated as it could have been.
  • This very point is the main driver behind the move. Google is playing catch-up in the home against Alexa, and with Amazon getting into security cameras and God knows what else in the future, Google has to make sure that Google Assistant is as deeply entrenched in our homes as it can possibly be.
  • We have seen Google offer bundles of devices; last year, for instance, there was a deal that paired Nest products over $100 with a free Google Home Mini, and these types of offers will help penetration.
  • While bundling is important, it is more about developing solutions with Google AI right inside those products. If Nest had remained an independent company, Google would have had to treat it the same way it would any other hardware partner. There is no question, however, that deeper integration, especially when it comes to an assistant, is the best solution for both users and Google. It allows Google to stay true to its vision, offers better alignment with hardware, and ultimately drives richer user engagement.
  • One big criticism leveled at Nest was how slowly it has been moving in developing products, with its biggest lineup announced only a couple of months ago. It will be interesting to see if that changes now under Osterloh.
  • It also remains to be seen if there is anything Google can do with already-announced products that have yet to ship, or if it is simply too late in the cycle and we need to wait for a new set of products that will be co-designed and co-engineered.

Twitter turns Profitable, at last!

Twitter reported a profit of $91 million in the last three months of 2017, compared with a loss of $167 million in the fourth quarter of 2016.  After aggressively slashing spending over the past few years, Twitter will invest in products this year that increase audience engagement, which will cause expenses to “more closely align with revenues,” Chief Financial Officer Ned Segal said during a conference call. Twitter will be doing more experimentation to make its timeline more “personalized and relevant” to people, Dorsey said. He also emphasized a focus on matching people with their interests as fast as possible. There will be “a much more cohesive strategy” around events, like seeing sports scores during live games, Dorsey said. New product tweaks, like Twitter’s decision to increase the character limit to 280, have increased engagement and minimized confusion, he said.

Via Bloomberg

  • The company said it had 330 million average monthly users in the fourth quarter, up 4 percent from the prior-year period but flat from the third quarter. The number fell short of expectations, but Twitter pointed to its crackdown on bots and fake accounts, which it says will ultimately improve the experience across the board.
  • Interestingly, the number of users grew internationally during the quarter but dropped in the US. Considering the recent discovery of Russian bot accounts that were used to influence not just the elections but US politics overall, the trend checks out, as the majority of the accounts killed by Twitter would have been US accounts.
  • The big question is whether user numbers will grow next quarter. While I am sure fake accounts and bots will continue to find their way onto Twitter, I would imagine the bulk of the spring cleaning is done.
  • A lack of user growth would substantially temper today’s optimism, as it would push Twitter to squeeze more advertising revenue from current users rather than being able to spread it over a bigger base.
  • COO Anthony Noto’s recent departure has raised concerns over future quarters’ performance and the continued turnaround, which is a valid concern. One would hope the wheels are in motion from a business perspective to prioritize engagement and make the platform more attractive to advertisers.

Goldman Sachs might finance Your next Apple Product

Apple is in talks with its investment bank Goldman Sachs about the possibility of offering customers financial loans when buying Apple products, according to a report by the Wall Street Journal on Wednesday. Right now, customers can get financing through Citizens Financial Group when they use the iPhone Upgrade Program, which offers interest-free cash for 24 months. Apparently, Goldman Sachs wants a part of that action, although whether it would be as a replacement for CFG or an additional option isn’t clear.

Via MacRumors

  • If true, Goldman Sachs would provide these loans through Marcus, an online lender created in 2016 that helps people refinance credit-card debt. Apparently there are plans for the company to build a ‘point-of-sale’ financing business that will offer loans to shoppers at checkout, essentially helping consumers avoid running up credit-card debt in the first place.
  • Many received these reports as positive for Apple, but given the lack of detail, we are left with more questions than answers. First, if the Goldman Sachs loans simply take over the current Citizens Financial Group loans, it would not help Apple expand the opportunity unless Apple is getting a better deal and is able to lower repayments. I doubt that the approval process would be different, as that would imply a bigger risk for Marcus. The offer might expand to iPhones on longer repayment cycles, which might be less appealing to Apple but, longer term, might alleviate the replacement fatigue around the higher-priced models.
  • There is also speculation that the loan offer will expand beyond iPhones. The current financing option for non-iPhone products is provided through a Barclays credit card, which is interest-free for a certain number of months based on the amount purchased. If you get a Barclaycard Visa and buy an Apple device within 30 days, you can get the device with no extra charge as long as it is paid off within 6-18 months, depending on how much is put on the card. For the higher-ticket items, 6 to 18 months is likely still limiting Apple’s addressable market. Goldman Sachs apparently plans to offer a loan at a 12% interest rate for customers who might want to take longer to pay off a device than the current options allow.
  • As I mentioned in my Apple earnings analysis, with iPhone prices increasing and replacement cycles lengthening, Apple would be smart to look at different financing options, especially in the US market where there is a bigger opportunity for its flagship models. We’ll certainly keep an eye on this development.

Attack of the Chromebooks

Google is about to make a hard push with Chromebooks, which have had back-to-back holiday quarters in the US as one of the few bright spots for growth. Google appears to be preparing a strategy to grow Chromebooks outside of the only market where they have meaningful sales–education.

Google wants to break Chromebooks out of education and try to get more consumers and businesses to ditch their Windows or Mac notebooks and jump to Chromebooks. While I appreciate Google’s ambition, there are some trends that suggest this is going to be a steep hill for Google to climb.

Rising PC ASPs
The steady, and fascinating, trend in the PC market has been the continued rise in average selling price. This has been going on for two years, and it is something I predicted in 2014 when I published a state of the PC market report. I outlined several scenarios, one where ASPs declined and one where they rose, and stated that the latter was my conviction: consumers were now shopping for PCs as mature buyers, a mindset that almost always leads to a rise in ASPs. When consumers know what they want, and why they want it, they start looking for solutions that better fit their needs, and that generally leads them to spend more money on the product.

Sure enough, since 2014 ASPs of PCs have been slowly rising as consumers look at brand, design, and key features and are willing to spend up/invest in the product, since they are going to hold onto their PC for at least 5-6 years. They spend up so it will last and fill their needs for as long as possible.

PC Gaming
In light of what is one of the biggest sleeper trends around, the return of PC gaming, I found this article from The Information intriguing. The article gets the scoop on a project within Google, code-named Yeti, to bring a gaming service that streams PC games residing on a server. If you are familiar with a service called OnLive, which tried something similar a few years ago, this appears to be the same idea.

Streaming PC games failed back then, and I don’t have much hope it works now, even if Google does it. PC gamers want the highest-resolution gaming experience, with the highest frame rates possible. To them, anything less is a competitive weakness. Dropped frames, lag, or a low-resolution visual experience can all mean death in the world of competitive online gaming.

That being said, PC gaming, good old-fashioned Windows gaming PCs, is coming back into style in a big way with younger consumers. When we scratch below the surface of a range of data points surrounding the PC industry and combine that with end-user research from the gaming console market, it becomes clear that those under the age of 20 are dramatically decreasing their console gaming time and moving to PC gaming.

I’ve seen a range of data points showing this particular demographic has decreased its console gaming time anywhere from 40-50% over the past three years. That time is moving to Windows PC gaming. I’ve interviewed more than a dozen Gen Zers across the country about why they are switching, and it boils down to two things: all their friends are moving to PC gaming, and the pace of innovation in the games themselves is much faster in the PC world. A game called PUBG (PlayerUnknown’s Battlegrounds) is a big source of this momentum.

This is a reason why PC gaming rigs from ASUS and Acer that cost under $1,000 have seen an uptick in sales. What Google doesn’t grasp about Chromebooks is that kids use them in school through middle school and then move to PCs for high school and above. So kids go to their parents when it’s time to get a PC for school and also want to play PC games with it. The parents agree getting a PC is a good idea but don’t want to spend $1,400 for a high-end gaming notebook. This is where the capable sub-$1,000 gaming notebooks from Acer and ASUS become great value for the money and a win-win for parents and kids.

Couple this with PC monitors, which NPD has confirmed as the single largest YoY growth category in US consumer retail, and you can see the trends aligning, as PC gaming notebooks had a lot to do with this growth.

Neither this trend nor the rising PC ASPs signals in any way that the broader market wants to move to the less capable, more thin-client computing that Chromebooks offer. But one last point still sticks out in my mind.

For Google to truly make a push with Chromebooks and have a prayer of taking any share from Microsoft and Apple in the notebook PC market, they will need to do something they have historically struggled with–getting developers to actually develop for their platform. Yes, developers write apps for Android, but rarely are those apps optimized for larger screens. This point sticks out like a sore thumb when it comes to Android tablets, where apps are largely just blown-up/scaled-up versions of their small-screen apps, and no larger-screen optimization has taken place.

Part of Google’s strategy for Chromebooks is predicated on ChromeOS running more native apps via Android. Google’s hope is that by bringing a more native app experience to ChromeOS, the platform will function more like a notebook, with Android apps optimized for the big screen and notebook form factor. This is the point I’m most skeptical about, and it is a giant hurdle standing in the way of Google’s ambitions with ChromeOS.

I want to conclude with an important observation. There is an event coming that will be fascinating to watch. There are approximately 350-400 million devices out there running Windows 7. Microsoft is ending Windows 7 support in 2020 and has urged customers to upgrade in the next three years. This end of support will accelerate the upgrade process in certain markets, and what those hundreds of millions of customers decide to buy next is up in the air. Everyone competing with Windows hopes they are in consideration, and this includes Apple with Macs and iPad Pro and Google with Chromebooks. With this event coming, Microsoft’s competitors smell blood in the water and want to pounce on the opportunity. Our research has yet to suggest doom for Windows in this scenario, but I don’t think Apple, in particular, has shown its full hand on its strategy to take big market share from Microsoft.

Ex-Intel President Leads Ampere into Arm Server Race

In a world where semiconductor consolidation is the norm, it’s not often that a new player enters the field. Even fabless semiconductor companies have been targets of mergers and acquisitions (Qualcomm being the most recent and largest example), making the recent emergence of Ampere all the more interesting. Ampere is building a new Arm-based processor and platform architecture to address the hyperscale cloud compute demands of today and the future.

Though the name will be new to most of you, the background and history are not. Owned by the Carlyle Group, which purchased the Applied Micro CPU division from MACOM last year, Ampere has a solid collection of CPU design engineers and has put together a powerful executive leadership team. At the helm as CEO is Renee James, a former President of Intel who left the company in 2015. She brings a massive amount of experience from the highest level of the world’s largest semiconductor company. Ampere also touts an ex-AMD Fellow, the former head of all x86 architecture at Intel, an ex-Intel head of platform engineering, and even an ex-Apple semiconductor group lead.

Architecturally, the Ampere platforms are built with a custom core design based on the Arm architecture, utilizing the ARMv8 instruction set. Currently shipping is the 16nm processor codenamed Skylark, with 32 cores and a 3.0 GHz or higher clock speed. The platform includes eight DDR4 channels, 42 lanes of PCI Express 3.0, and a TDP of 125 watts. The focus of this design is on memory capacity and throughput, with competitive SPECint performance. In my conversation with James last week, she emphasized that memory and connectivity are crucial to targeting lower costs for the cloud infrastructure that demands them.
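To illustrate why eight DDR4 channels matter for a memory-throughput-focused design, here is a back-of-the-envelope peak bandwidth calculation. The DDR4-2666 transfer rate is my assumption for illustration; Ampere has not, to my knowledge, published the supported memory speed.

```python
# Back-of-the-envelope peak memory bandwidth for an 8-channel DDR4 design.
# The DDR4-2666 speed is an assumed figure, not an Ampere spec.

def peak_bandwidth_gb_s(channels, transfers_per_sec, bus_bytes=8):
    """Peak bandwidth = channels x transfer rate x 8-byte (64-bit) bus width."""
    return channels * transfers_per_sec * bus_bytes / 1e9

skylark_like = peak_bandwidth_gb_s(channels=8, transfers_per_sec=2666e6)
six_channel = peak_bandwidth_gb_s(channels=6, transfers_per_sec=2666e6)

print(f"8 channels: {skylark_like:.1f} GB/s")  # ~170.6 GB/s
print(f"6 channels: {six_channel:.1f} GB/s")   # ~128.0 GB/s
```

On that assumption, an eight-channel design offers about a third more peak bandwidth than the six-channel configurations common in contemporary x86 server parts, which is consistent with the memory-first positioning James described.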

The second generation of Ampere’s product stack, called Quicksilver, is coming in mid-2019. It will move to the updated ARMv8.2 instruction set, increase core count, improve overall IPC, and add multi-socket capability. Memory speed will get a bump, and connectivity will move to PCI Express 4.0. It will include CCIX support as well, an industry-standard cache-coherent interface for connecting processors and accelerators from various vendors.

Interestingly, this part will be built on TSMC’s 7nm process technology, which James says will have a “fighting chance” to compete with or beat the capabilities provided to Intel by its own in-house process technology. That isn’t a statement to make lightly, and it puts in context the potential long-term impact of Intel’s continued 10nm delays.

For systems partnerships, Ampere is working with Lenovo. This is a strong move by both parties, as Lenovo has significant OEM and ODM resources, along with worldwide distribution and support. If the Ampere parts do indeed make an impact in the cloud server ecosystem, having a partner like Lenovo that is both capable and eager to grow in the space provides a lot of flexibility.

Hardware is one thing, but solving the software puzzle around Ampere’s move into the hyperscale cloud server market is equally important. James told me that the team she has put together knows the importance of a strong software support system for enterprise developers, and having seen that happen first hand at Intel gives her a distinct advantage. Even though other players like Arm and Qualcomm are already involved in the open-source community, Ampere believes it will be able to make a more significant impact in a shorter period, moving support forward for all Arm processors in the server space. Migrating the key applications and workloads, like Apache, memcached, Hadoop, and Swift, to native, and most importantly efficient, code paths is required for wide-scale adoption.

Followers of the space may be wondering why now is the right time for a company like Ampere to succeed. We have seen claims and dealt with false promises from numerous other Arm-based server platform providers, including AMD and the source of Ampere’s current team, Applied Micro. Are the processors that much different in 2018 from those that existed in 2013? At their core, no. But it’s the surrounding tentpoles that make it different this time.

“Five years ago, this couldn’t have happened,” said James in our interview. The Arm architecture and instruction set have changed, with a lot more emphasis on the 64-bit superset and expanding its capability to address larger and faster pools of memory. Third-party foundries have caught up to Intel as well – remember that James believes TSMC’s 7nm node will rival Intel competitively for the first time. Finally, the workloads and demands from the datacenter have changed, moving even further away from the needs of “big cores” and towards the smaller, more power-efficient cores Ampere and other Arm options provide.

Obviously, that doesn’t apply to ALL server workloads, but the growth in the market is in that single-socket, memory- and connectivity-focused segment. AMD backs up Ampere’s belief here, with its own focus on single-socket servers to combat the Intel-dominated enterprise space, though EPYC still runs at higher power and performance levels than anything from the Arm ecosystem.

James ended our interview by comparing the Arm server options today to x86 servers more than 25 years ago. At the time, the datacenter was dominated by Sun and SPARC hardware, with Sun Microsystems running advertising claiming that Intel’s entry into the space with “toy” processors wasn’t possible. Fast forward to today, and Intel has 99% market share in the server market with that fundamental architecture. James believes the same trajectory lies before the Arm-based counterparts rising today, including Ampere.

There is still a tremendous mountain to climb for both Ampere and the rest of the Arm ecosystem, and to be blunt, nothing proves to me that any one company is completely committed. Qualcomm announced its Centriq CPUs last year, and Ampere claims to have started sampling in 2017 as well. We don’t yet have a single confirmed customer that has deployed Arm-based systems in a datacenter. Until that happens, and we see momentum pick up, Ampere remains in the position where previous and current Arm-based servers are found: behind.

Can the PC Market Ever Grow Again?

One of the big questions we at Creative Strategies get asked by all of the big PC and semiconductor companies with skin in the PC game is whether the PC market could ever grow again. If you look at the Gartner chart below, you see that starting in 2012, the PC industry has been in significant decline. Since 2011, the PC market has shrunk by 32%, and while 2017 numbers are not in yet, we believe it was down 3-4% last year. That is a huge drop in PC sales that has had a major impact on just about everyone in the PC ecosystem today.
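A 32% contraction compounds to a surprisingly modest annual decline. As a quick sanity check on the figures above (treating the drop as spread evenly over the six years from 2011 to 2017):

```python
# Implied average annual decline from a 32% cumulative drop over 6 years.

total_decline = 0.32   # market 32% smaller since 2011 (from the text)
years = 6              # 2011 -> 2017

# Compound annual growth rate: (ending/starting)^(1/years) - 1
cagr = (1 - total_decline) ** (1 / years) - 1

print(f"Implied annual change: {cagr:.1%}")  # about -6.2% per year
```

An average decline of roughly 6% a year, set against our 3-4% estimate for 2017, suggests the rate of decline has been moderating in the most recent years.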

For the last seven years, when asked about the PC market ever growing again, our answer was always no. But recently, we have seen some significant technology becoming more available that can be applied to PCs and that, at least for a few years, could reverse the decline in PCs and cause millions of users to upgrade their PCs even if they are only two years old.

At the heart of this new technology push is Qualcomm’s “Connected PC” design that they launched in Maui last December, touting a more powerful Snapdragon processor with a cellular radio built into the overall chip design, making it always connected as well as powerful enough to run Windows 10 S. Qualcomm and Microsoft hope to entice people to upgrade their PCs faster using the “Connected PC” idea, promising them a better overall experience since their PC, like a smartphone, would always be connected.

However, as I pointed out in my Techpinions column on Monday, our research shows that what people really want is longer battery life; it was the #1 requested feature in our recent survey on what features people want when they buy a new laptop. Interest in adding a cellular modem to their laptop came in at #6 in this survey.

The good news for Qualcomm and Microsoft is that the other part of the “Connected PC” program is Snapdragon’s ability to deliver long battery life in these new laptops. At the event, they stated that a “Connected PC” could deliver at least 22 hours of continuous use, and sources tell me that Qualcomm is working on more advanced processors that could give users closer to 30+ hours by early next year.

I see two major developments that could lift demand for laptops by mid to late 2018. That means that perhaps by 2019, we could see new demand for PCs rise as people seek a laptop that delivers all-day computing. If so, this is something I believe could be a catalyst for a major three-year refresh cycle for portable PCs and see the PC market grow again.

The first is a major battle between Qualcomm and Intel to be the one who delivers the ultra-long battery life people want if they are to upgrade their PC. This is where I see Qualcomm having a major advantage over Intel. From what sources tell me, Intel’s most advanced mobile processors will be available mid-year and, at best, can deliver only 18-20 hours of continuous use. Qualcomm has already guaranteed 22 hours with its current 835/845 Snapdragon processors, and I do not doubt that by late 2018 they could deliver at least 30 hours of continuous use.

That said, Intel will have an advantage from the performance standpoint since all Windows operating systems and apps run natively on its x86 processors. Microsoft is working very closely with Qualcomm to make a version of Windows 10 S work very well on Snapdragon with minimal emulation. If Microsoft can deliver on this promise, it could help Qualcomm make significant inroads with PC vendors, who already know that the #1 thing their customers want is longer battery life and need to make it part of their product roadmaps by year’s end.

The second thing I see happening is a huge marketing push by Qualcomm, Intel, Microsoft, and all PC vendors to launch and brand a new category of laptops, loosely called “All-Day, Always-On” laptops. As I stated in Monday’s Techpinions column, I felt Qualcomm’s emphasis on the connected PC idea was, from a marketing viewpoint, off the mark; they should have led with the all-day computing message first and tied always-connected into their overall design messaging.

If Qualcomm and Intel, along with Microsoft and PC vendors, create all-day laptops and market the dickens out of these new systems, I suspect it will resonate well with business and consumer users who have told us battery life is the most important feature they want in a new laptop. More importantly, all-day computing could be the real motivation for people to start upgrading their PCs faster than normal and get demand for PCs into positive territory at least through 2021-2022.

Reflections on an interview with Uber’s Chief Brand Officer Bozoma “Boz” Saint John

Last Thursday I had the great opportunity to attend an event titled “Driving Change” at the Computer History Museum in Mountain View, where Verge editor Lauren Goode masterfully moderated a conversation of more than an hour with Uber’s Chief Brand Officer Bozoma Saint John.

Like many, I was blown away at Apple’s Developer Conference in 2016 when this force of nature that is Bozoma Saint John walked onto the stage and instantly captivated a room full of geeks and press. Self-proclaimed “badass Boz” became the face of Apple Music and, to some extent, helped Apple shake off that “all white male” image.

When Saint John took her role at Uber as Chief Brand Officer, I wished her well, given the mess Uber was in and how much work it would take to turn the brand around. It is not often in tech that you see a Black leader, and even less often a Black woman in that role, and Saint John, while she has her work cut out for her, has the opportunity to be the mastermind of such a big brand turnaround.

When my husband flagged this event to me, I put it on the calendar straight away. As a tech analyst, I was eager to understand more about the Uber culture, her role and how she planned to make a difference. I also happen to have a biracial daughter. So as a mom, I look for opportunities for her to see and meet smart, driven, successful women and women of color in particular. This event was a great double whammy!

Needless to say, I got what I was hoping for and then some!

Saint John made several interesting points on marketing and branding, one of which was about measuring success. When asked how she will measure success, she said she would use all the available data, such as net promoter scores and brand affinity. However, she believes more in softer measures like “being proud of walking to the store wearing an Uber t-shirt.” While this might seem like a warm and fuzzy kind of answer, I think it is precisely the kind of measure Uber needs.

While I was interested in what Saint John had to say about marketing and branding, I was particularly looking forward to listening to her views on diversity in tech. Here are some of my takeaways.

On Diversity…

There were so many good points made in the conversation. As I was listening to what was being discussed, I kept thinking about the fact that the room I was in was the most diverse audience I had been in since I moved to Silicon Valley six years ago. Undoubtedly the most diverse audience I have seen at any event that was directly or indirectly related to tech. My daughter had never seen so many black women in one room, and it was empowering for her. Their participation in the room and on Twitter told me that aside from marketing and tech, they were there for leadership and inspiration. This speaks to the need for diversity leaders we can identify with. Diversity leaders who can show us that there is room at the table for people like us. Of course, this does not stop with gender and race. Representation is crucial.

When it comes to tech companies, Saint John said it is shameful how few black employees there are. Fixing this, she argued, is not the sole job of a diversity officer, more often than not a person of color. The responsibility of driving diversity rests with the CEO and the whole management of a company. As I was listening, I could not help but think she was referring to Apple, which might be unfair on my part. Tim Cook has been a very vocal advocate for diversity, but while the numbers within the company have been growing, they have been doing so at a very slow pace. Yet Apple is ahead of many other tech companies, which is what is most discouraging. Hiring practices must change to see a significant impact. If you are trying to diversify your workforce and your talent pool remains Silicon Valley, things will not change. Tech must broaden its reach when it comes to talent and must support schools and organizations across the US to work with kids from minority groups, so they have a chance to get into tech, grow their talent, and get ready for the job market. Such work must be done with a sense of urgency, and higher goals should be set for what is considered success. A one-percentage-point increase in lower-paying jobs is not what companies should be aiming to achieve over a year.

Another fascinating point Saint John made was about the great responsibility minority leaders have on their shoulders. They get judged for who they represent: women, Black, gay, Muslim… But this is not how any white person is judged. I am sure that for any minority person reading this article I am stating the obvious, but this might be news to others. I am fortunate; I only have gender to contend with, but I often think about how something I do or say would reflect on other women. But the reality is that I am me and, for good or bad, there is nobody else like me, so why should I feel responsible, or be made to feel accountable, for a whole group of people?

On Being Yourself…

“If you say something, own it! Don’t add LOL at the end of your strong statement to soften the message” said Saint John to a member of the audience who signed her question “impatient black woman LOL”.

I know I do that all the time, especially with male colleagues, clients, and peers; it is the verbal equivalent of the “just kidding” I add at the end of a criticism or an opinion just so that I do not come across as threatening. I think many women do this, independent of the color of their skin, and we really should stop worrying and start owning what we say.

Maybe the best message that could have been given to my daughter, and to any young person who looks or feels different from the status quo, was that it is ok to be different. Not only is it ok to be different, but if you are in an environment that does not allow you to be yourself, you should not be there. You also should not fit a stereotype that others have created for you so they feel safe.

I was not sure how much of what was said on stage sunk in with my daughter. Meeting Lauren and Boz at the end of the night was for sure the highlight of her evening. On the way home, though, I asked her what she learned, and she said: “I learned that is ok to be me and that I am not everybody else that looks like me, I am just me.” I smiled and kept on driving, pride oozing from every pore.

Wearables to Benefit from Simplicity

Sometimes simplicity really is better—especially for tech products.

Yet, we’ve become so accustomed and conditioned to believe that tech products need to be sophisticated and full-featured, that our first reaction when we see or hear about products with limited functionality is that they’re doomed to failure.

That’s certainly been the case for a while now with wearables, an ever-evolving category of devices that has been challenged to match the hype surrounding it for 5-plus years. Whether head-worn, wrist-worn, or ear-worn, wearables were supposed to be the next big thing, in large part because they were going to do so many things. In fact, there were many who believed that wearables were going to become “the” personal computing platform/device of choice. Not surprisingly, expectations for sales and market impact have been very large for a long time.

Reality, of course, has been different. It’s not that wearables have failed as a category—far from it—but they’ve certainly had a slower ramp than many expected. Still, there are signs that things are changing. Shipments of the Apple Watch, which dominates many people’s definition of the wearable market, continue to grow at an impressive rate for the company. In fact, some research firms (notably IDC) believe the numbers surpassed a notable marker this past quarter, out-shipping the collective output of the entire Swiss watch industry in the same period. Now, whether that’s really the most relevant comparison is certainly up for discussion, but it’s an impressive data point nonetheless.

Initially, the Apple Watch was positioned as a general-purpose device, capable of doing a wide range of different things. While that’s certainly still true, over time, the product’s positioning has shifted towards a more narrowly focused set of capabilities—notably around health and fitness. While it’s hard to specifically quantify, I strongly believe that the narrower device focus, and the inherent notion of simplicity that goes along with it, have made significant contributions to its growing success. When it comes to wearables, people want simple, straightforward devices.

With that in mind, I’m intrigued by news of a new, very simple glasses-based wearable design from an organization at Intel called the New Devices Group. These new glasses, called Vaunt, look very much like traditional eyeglasses, but feature a low-power laser-based display that shoots an image directly onto your retina. Enabled by a VCSEL (vertical-cavity surface-emitting laser) and a set of mirrors embedded into the frame, the Vaunt creates a simple, single-color LED-like display that appears to show up in the lower corner of your field of view, near the edge of your peripheral vision, according to people who have tried it. (Here’s a great piece describing it in detail at The Verge.)

There are several aspects of the device that seem intriguing. First, of course, the fact that it’s a simple, lightweight design, not a bulky or unusual one, and therefore draws no attention to itself, is incredibly important. In an era when every device seems to want to make a statement with its presence, the notion of essentially “invisible” hardware is very refreshing. Second, the display’s very simple capabilities essentially prevent it from overwhelming you with content. Its current design is only meant to provide simple notifications or a small amount of location- or context-based information.

In addition, while details are still missing, there doesn’t seem to be a major platform-type effort, but rather a simple set of information services that could theoretically be embedded into the device from the start, or slowly added in an app-like fashion more akin to how skills get added to an Amazon Alexa. So, for example, it could start out by providing the same kind of notifications you currently get on your phone, then add location-based services, such as directions or simple ratings for restaurants that are literally right in front of you, as well as intelligent contextual information about a person you might be having a conversation with.

Key to all of this, however, is that the design intentionally minimizes the impact of the display by putting it out of the way, allowing it to essentially disappear when you don’t actively look for it. That ability to minimize the impact of the technology—both functionally and visibly—as well as to intentionally limit its capabilities is a critically important and new way of thinking that I believe can drive wearables and other tech devices to new levels of success.

Ironically, as we look to the future evolution of tech devices, I think their ability to become more invisible will lead them to have a more pervasive, positive impact on our lives.

Diving Deeper on HomePod

I encourage you to read my public thoughts on HomePod from spending about a week with Apple’s newest addition to the product family. A caveat needs to be made about my take on HomePod: I’m not the normal consumer who will get their hands on this product and form an opinion. Due to the nature of my job, I use more technology, try a vast array of products, and integrate them all into my life in ways most consumers never will. So the comparisons I can make of products against each other are not things normal people will ever experience.

Which is the main reason why the things I believe Apple could, should, and will eventually add to HomePod are not the things a normal consumer, getting their very first smart speaker, will want or even think about. I like to say that, as part of my job, I have to try to live in the future in the present day. All that to say, the things I want are not the things most people want—yet. But my belief is that at some point consumers will want many of the same things I do, though not until they have used these products for some time. So, it’s not that my take is invalid, only that I’m coming from the viewpoint of a mature user in a very immature market.

For that reason, I think Apple made the right decisions with HomePod to meet the market where it is, just as the category is getting started, in every area but price. Normally, I never worry about Apple’s pricing, but in this case, I do think the price tag is going to be hard for many consumers to swallow.

I made the point in today’s article that if you really care, and are picky, about the sound quality of your music, then by all means spend the money on HomePod. But that is not most consumers. Most consumers aren’t sitting in their living room appreciating a great song/artist/album in solitude with a glass of wine. For most consumers, and specifically the smart speaker owners we observed, music is simply background noise or simple ambiance. When music is background noise, quality is largely irrelevant, and for that use case any smart speaker will do. Now, there is an area where HomePod could speak to a broader base, and that is when it supports Apple TV and can be used not just to control Apple TV but to play back audio from movies and TV. I do believe that is an area where consumers care a bit more about audio than just ambient background noise.

All of that to say, HomePod is going to be an evolving story in an increasingly competitive market. I’ll be fascinated to see how Apple continues to develop HomePod from a functionality standpoint as they straddle a difficult line of making Siri useful from both a personal and communal standpoint.

HomePod’s Engineering

One of the parts of the HomePod experience that was quite interesting to me was a tour we received of Apple’s audio labs. During this tour, Apple took us through where they develop and test the audio experiences related to their products. In these labs, and the subsequent work derived from them, the audio experiences on iPad, iPhone, Mac, and more were created. Anyone who has used a product like a new Mac, an iPad Pro, or even the latest iPhones knows the speakers on these products are quite impressive. This audio lab is where the quality sound of the speakers in Apple products is created and tested. And from these labs came HomePod.

CAPTION: An extremely quiet Noise & Vibration chamber in Apple’s sound lab in Cupertino used to measure the noise floor of HomePod.

This whole experience reinforced a theme I’ve written about before: Apple’s attention to detail. What Apple does with audio in their products is nothing short of incredible, and it shows with HomePod. Years of audio engineering expertise clearly led to the amazing sound of HomePod, and I’m curious whether those learnings might now make their way to other devices, like AirPods for example. I fully understand the engineering differences between these two products, but HomePod sounded so good that I wanted that sound on all my devices. Maybe this is me dreaming, but so be it.

Apple Music/HomePod Exclusives
Given the quality of the sound, the other thing I got to thinking about was: what if artists started creating HomePod-specific mixes of their albums as Apple Music exclusives? While HomePod already sounds amazing, it could sound even better if artists mixed tracks specifically for HomePod’s unique audio experience, giving consumers a unique and authentic listening experience truly aligned with how the artist wanted the music to sound.

Strategically, this kind of effort by artists can only help both HomePod and Apple Music. I’m intrigued by this idea of HomePod/Apple Music exclusives and what artists could do to take advantage of it. For example, what if John Mayer decided to do a live/acoustic set for his fans via HomePod? You could argue that he could do this without HomePod. However, HomePod sounded so amazing that when I played some of John Mayer’s live acoustic sets, it sounded like he was in my living room giving me a private concert.

Whether this happens or not, this line of thinking is interesting as we think about products like this coupled with Apple’s efforts in exclusive content. It isn’t too outside the lines to think some of these scenarios are possible once the installed base of HomePod becomes large enough.

Playing the Long Game
Apple is taking the marathon approach here, compared to Amazon’s sprint, and both are viable given each company’s desire to compete in this space. Apple’s positioning with HomePod is clear: music is the focus. Over time, I expect the assistant story to grow for Apple and begin to catch on with consumers as well.

The biggest question is how much time Apple has. We don’t know the answer, but it is an important question surrounding Apple’s strategy for getting Siri more involved in the home.

One last point for now: the real challenge facing HomePod is convincing consumers (the broader market, not audiophiles) to care about sound again. In my opinion, this happens when you compare HomePod to the competition. My fear is that Apple will not set up in its retail stores the type of side-by-side demo we received, which will make it hard for shoppers to truly hear the difference. When you hear a product like the Sonos One, and even the Google Home Max to a degree, all by itself, it sounds great. It isn’t until you hear them compared to the HomePod that you can truly grasp and appreciate how much better HomePod sounds. Apple’s challenge will be to give consumers this experience.

I’m very curious how they handle this product at retail and we will get a chance to see that this weekend.

You Can’t Unhear Apple’s HomePod

Before receiving my Apple HomePod to review, I found myself in a house in Noe Valley in San Francisco. Apple invited me to see and experience HomePod in a unique home setup before taking one home to try for myself. I’ll spare you the details of the entire demo as there was one demonstration where HomePod’s value was truly made clear.

On an entertainment center that looked like a retro design out of the ’70s, with silver and copper knobs, wood like old cedar, and metallic grates, sat a Sonos One (Alexa-enabled), a Google Home Max, Apple’s HomePod, and a second-generation Amazon Echo.

This demo was not that different from the one I had in June at Apple’s Worldwide Developers Conference. At that event, the lineup was HomePod, a first-generation Amazon Echo, and a Sonos Play:3. Even in that demo, in a highly controlled room, HomePod sounded head and shoulders better.

This demo included the much-praised Google Home Max and a quality new speaker from Sonos in the Alexa-enabled Sonos One, of which I personally own three. When you listened to each speaker in comparison to the HomePod, you realized how the audio engineers at each company focused on different things. The Google Home Max focused on bass; it had a lot of bass, so much, in fact, that the product ships with a rubber mat Google recommends you place the speaker on. Given how much bass the Home Max emphasizes, at the expense of clean vocals and certain instrumentation, I imagine the mat is there to limit the vibrations the speaker makes on a hard surface. The Google Home Max was too heavy on bass for my preferences.

The Sonos One was the second-best sounding after HomePod. I felt the Sonos sound engineers (in comparison to the others) did a good job balancing the sound more in the mid-range, not emphasizing too much bass or too much treble. It was clean and balanced. The Echo sounded the worst by a large margin. But the HomePod was a different audio experience entirely.

Listening to these demos side by side, and even once I got HomePod home and compared it in my living room to my Sonos One and my Amazon Echo, what hit me was that once you hear the HomePod, it is hard to unhear it. Once you listen to it and experience it for yourself, there is no going back. My Sonos, as great as it sounds, and my Echos just didn’t sound the same after I listened to the same songs on the HomePod. You can’t unhear the quality of the HomePod, and it will change your opinion of many other speakers you may own. I can say, with absolute confidence, the HomePod will be the best sounding speaker many people have ever owned.

Many of my friends have asked me how I would describe what HomePod sounds like. With the caveat that you just have to hear HomePod to truly experience it, let me attempt to articulate my experience.


The Sound Experience

HomePod has what can only be described as the most balanced audio, not just of any smart speaker but of any speaker I currently own, which includes a number of Sonos speakers and a Bose home theater system. By balanced, I mean evenly distributed quality sound. With many speakers and sound systems, there is a zone of perfection: a specific place, or alignment of your body, where the system sounds best. HomePod is unique in that it doesn’t have a singular place where it sounds best. Apple’s engineers designed HomePod to sound its best no matter where you are in the room. In our demo, Apple explained how this was done technically, which is beyond my limited understanding of audio engineering. But I did try to prove them wrong and failed. HomePod truly did sound great from any place in the room.

The other thing that really impressed me about HomePod was how great it sounded at nearly every volume level. If you have any experience with speakers, you know there is also a sweet spot for volume. Too low and you lose almost all the bass; too high and you blow out the treble, and your ears often hurt as the high end of the audio starts to distort and lose clarity. With HomePod, even at low volumes you heard balanced bass and clarity across the spectrum of sound frequencies, and the same was true when you pushed HomePod past 90% volume. What was impressive was how you could stand right next to HomePod even when the volume was quite high without feeling like your ears were being blown out, and it maintained clarity in the sound.

Interestingly, thanks to both the balance of the audio and the way the sound is distributed, I found HomePod even sounded terrific at a distance, meaning when I was in other rooms of my house. HomePod lived in my living room, and even as I moved as far away as the upstairs, I could still hear the distinct bass, vocals, and overall clarity. When I tried the same with my Sonos or Bose, HomePod won in all of these tests of overall audio quality.

I have no doubt, HomePod will compete with the best speakers in your house even if you have an expensive/high-end setup. Granted that is not a massive portion of the market, which is why I’m confident in saying that for most consumers HomePod will be the best sounding speaker they have ever owned.

Siri on HomePod
Now, we have to talk about the experience with Siri. It is difficult for me to go as deep as I want here without fully articulating why I feel Siri on HomePod is an important strategic initiative for Apple. I have gone into depth for our subscribers in these articles, which I encourage reading. In short, Apple’s strategy with Siri has to be zero friction in accessing Siri no matter where I am or what I’m doing. Before HomePod was in my home, I used Alexa dramatically more than Siri. The reason was simple: I didn’t have to do anything but speak. You may say Siri is on your wrist. True, but I have to raise my wrist and tilt my Apple Watch toward me to initiate Siri. As easy as that may sound, I’m not always in a position to do it, especially when I’m cooking or doing things where my hands are occupied. You may also say Siri is on your phone, always listening. True, but my phone is not always near me when I’m home. Sometimes I set it on a table in another room, and when it is near me, it is in my pocket. While “Hey Siri” may work when my iPhone is in my pocket, that is an awkward place to engage with Siri.

Having Siri on a truly always-listening, loud ambient speaker creates genuinely zero-friction engagement with Siri in my home. HomePod became my preferred way to interact with Siri, and I found myself using Apple’s assistant dramatically more than I ever did when at home, confirming to me why this is such an important strategic move for Apple. And luckily, Siri worked very well, delivering on many of my expectations and even answering some of my past criticisms.

Now, Siri on HomePod is not much different from Siri on your other Apple products. She is a little more limited on HomePod and can’t do everything you can do on your iPhone. The reason is that HomePod is designed to be used by everyone in your household, not just the person who set it up. This works as expected: Siri can play songs, set alarms, check the weather, and do a bunch of general tasks for everyone in the household, even when the person whose iPhone/iCloud account it is tied to is not there.

When it came to music, Siri knocked it out of the park. In fact, because Siri learns about its owner, when I asked her to play music, say “play Jack Johnson radio,” she would say “sure, here is a personalized X for you.” What’s happening is Siri is acting as a “mixologist,” as Apple likes to say; essentially she is playing DJ according to my music preferences. This worked fantastically, and I have about as wide a range of musical interests as anyone. To me, good music is good music, and I enjoy the creativity of all artists and genres. Usually, this has given Siri trouble, but I found the playlists she generated to be very relevant. As I mentioned above, this feature is not unique to HomePod and functions the same way on other Siri-enabled devices, but I make the point because Siri’s music functionality is particularly useful on HomePod, given that music is what most consumers will primarily use it for. That is why I was happy Siri delivered on the music experience.

One part where Siri on HomePod really stood out against other smart speakers was the communications element, particularly with text messages. Now, this only works for the person who set up HomePod, which in this case was me, but always being able to send messages and have messages read to me was extremely useful in the home environment. Given I’m in Apple’s ecosystem, I knew the communications/productivity part would be the one angle that differentiated Siri from Amazon’s Alexa and Google Assistant.

Siri on HomePod is not as full-featured as Siri on your Macs or iOS devices, and this was done by design. Siri on HomePod focuses on doing a few things well, both for the individual and for the communal family of the house, and in my experience it did those things well. Alexa and Google Assistant do have more features, for now, and are more advanced in their functionality.

Overall, what stood out to me in my experience was that the deeper you are in Apple’s ecosystem, the more value you will find in HomePod if you are in the market for a smart speaker. Being able to see what song is playing on your Apple Watch, quickly move a phone call from your iPhone to the HomePod as a speakerphone, or change songs from your Apple Watch were all key differentiators for me. Most consumers don’t have as many Apple products as I do, but for those who do, HomePod is a great addition to the ecosystem, and its functionality will be appreciated over competing smart speakers.

To Buy or Not to Buy
Thinking about HomePod within the broader market for smart speakers, it is smart for Apple to emphasize not just music quality but also Siri as it relates to music. Both of those use cases deliver fully on their value. Music (specifically the experience with Apple Music) is where HomePod, and Siri, will shine. Consumers who lean toward these value propositions first and foremost will be delighted.

To the question of whether HomePod is worth the premium over a product like the $199 Sonos One, which has Amazon’s Alexa, I’d say absolutely, if you truly care and are picky about sound quality and/or you are deeply embedded in Apple’s ecosystem.

For the time being, if you want a great sounding speaker with multi-room capability and a somewhat more full-featured assistant in Amazon’s Alexa (with Google Assistant support coming), then the Sonos One is a great option and a great value for the money. In fact, the more I compared the Sonos One to the HomePod, while HomePod did sound better, the more impressed I was with the sound quality of the Sonos for the price.

This space is going to be interesting to watch. We all have our suspicions for how this market may play out, but we now have a legitimately competitive market with lots of options for consumers at many price points and features. Ultimately, this is when a market gets exciting because with all this competition consumers win.

Why the Connected PC Initiative Misses the Mark

Last December, Qualcomm held a major media event in Maui, HI to launch what they call their connected PC initiative. Qualcomm is best known for the cellular radios in almost all smartphones, and their Snapdragon 835 and 845 processors are now capable enough to also power a laptop. The key idea is to add a cellular connection to laptops using their Snapdragon processors, making them “connected PCs” since those laptops would always have a connection via Wi-Fi or cellular, just as our smartphones do today.

Joining them in this announcement was Microsoft who strongly supported Windows OS on a Qualcomm processor, also known as Windows on ARM. If this sounds familiar, Microsoft launched a similar program with various ARM processor companies in 2014, but it failed since the processors back then were not powerful enough to handle Windows OS and Windows had to be run in an emulation mode which made these ARM-based laptops run sluggishly at best.

This time around, the processor Qualcomm is bringing to the table is fast enough to run Windows 10, even when, in some cases, it has to revert to emulation mode to do so.

As I sat through the major presentation by Qualcomm and Microsoft executives describing their new “Connected PC” program at the Maui event, the first thing I thought was “is this just a new try at Windows on ARM?”, remembering what a disaster that was the last time it was tried. But as I checked out the demos and did some one-on-ones with Qualcomm and Microsoft executives about what a more powerful Snapdragon processor and a tailored version of Windows 10 S created for this program could deliver, I saw that this idea had real merit and potential.

While in theory I like the idea of always being connected, anytime and anywhere, I knew from our research that cellular connectivity is not a high priority among the features people want in a laptop. Indeed, cellular modems have been available as laptop options for over ten years, and demand for the feature is very low.

Another good benchmark to measure demand for cellular connectivity beyond a smartphone is the cellular activation rates of iPads. It turns out that of all iPads sold, around 50% buy up to include a cellular modem. But our research shows that less than 20% of those iPads with a cellular modem in them activate them.
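The two percentages in that benchmark compound. As a quick back-of-the-envelope sketch (using the approximate figures cited above, not exact market data), the share of all iPads actively using cellular works out like this:

```python
# Back-of-the-envelope: share of all iPads actively using cellular.
# Both rates are the rough figures cited in the text, not precise data.
cellular_attach_rate = 0.50   # ~50% of iPads sold include a cellular modem
activation_rate = 0.20        # <20% of those cellular iPads ever activate it

active_cellular_share = cellular_attach_rate * activation_rate
print(f"{active_cellular_share:.0%} of all iPads actively use cellular")  # → 10%
```

In other words, roughly one in ten iPads sold ends up with an active cellular connection, which is why carriers see so little revenue from tablet data plans.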

The key reason for the lack of real demand for a cellular connection in a laptop or tablet is the additional cost it adds to a person’s cell phone bill. When I asked one major cellular carrier how they would price the connection on a connected PC, they said it would be an additional $10 to $12 monthly fee, and data used on the laptop would count against the monthly data allotment the person already pays for.

I could imagine that a younger demographic user who watches a lot of Youtube videos and accesses a lot of content on their laptops now, could go through their allotted all-you-can-eat 22-25 gig personal data plan in one or two weeks and then their data speeds on both their smartphone and connected laptop go down to 128 kbps.
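To see how plausible that one-to-two-week burn rate is, here is a rough sketch; the streaming bitrate and daily viewing hours below are my own illustrative assumptions, not carrier or YouTube figures:

```python
# Rough sketch: how fast laptop video streaming exhausts a 22 GB soft cap.
# The per-hour data rate and daily hours are assumptions for illustration.
gb_per_hour_hd = 2.0   # assumed ~2 GB/hour for HD video streaming
hours_per_day = 1.5    # assumed daily laptop viewing for a heavy user

days_to_cap = 22 / (gb_per_hour_hd * hours_per_day)
print(f"~{days_to_cap:.0f} days to reach a 22 GB cap")  # → ~7 days
```

Even with these modest assumptions, the cap is gone in about a week, after which both devices crawl at throttled speeds for the rest of the billing cycle.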

Our research on demand for cellular in a laptop was done some time back, so early this year we updated the survey by asking people “what are the three most important features you want in the next notebook or laptop you will buy?” As you can see from the chart below, long battery life, more memory, and larger hard drive storage topped the list. Cellular connectivity came in farther down, at just over 20% interested, which pretty much maps to our iPad research mentioned above.

The good news for Qualcomm and Microsoft is that while both touted the “connected PC” initiative at the event, they also emphasized that by using these new Snapdragon processors one could get as much as 22 hours of continuous battery life. In talks with their execs after the main announcement, they hinted that people could probably get even more hours of battery life depending on how their OEM partners configured them and the OS versions they would use from Microsoft.

My fear for both Qualcomm and Microsoft is that by leading with the connected PC story and subsequent marketing pushes around that focus, they will not drive the kind of adoption they hope to get from this program, and we could have another Windows on ARM failure in the works. The research we did a year ago, and again in the last week, shows that the real interest is in longer battery life. That is what would drive significant demand for Windows on ARM with Qualcomm this time around, provided it delivers the kind of performance they stated at the launch event in Maui in early December. They should brand this the “All Day PC” and make that the new battle cry for laptop upgrades going forward.

This is an important moment for the PC industry. While consumers like new designs that are thinner and lighter, as our survey points out, that is not what drives purchases of new laptops. Longer battery life, more memory, and more storage top their buying criteria. If Qualcomm and Microsoft, along with others, want to compete with a feature that may drive a new level of demand for laptops in the future, they need to cater to these interests and make cellular connectivity a nice-to-have feature for those who are willing to pay the connectivity tax they will get from their carriers.

Apple’s Holiday Quarter iPhone Hat Trick

As is often the case, Apple was able to quiet rumors of the iPhone sky falling by announcing stellar results during its quarterly earnings call that put the company at the top of the market in terms of smartphone shipments, average selling price (ASP), and revenues. While total iPhone shipments were down slightly from the year-ago quarter, that doesn’t tell the whole story. Based on IDC’s preliminary numbers for the quarter, Apple shipped more smartphones than any other vendor at 77.3 million units (Samsung shipped 74.1 million). That’s a 1.3% decline for Apple versus a broader market that slipped 6.3%. Importantly, Apple shipped these volumes while enjoying an ASP that increased by more than $100 from the year-ago quarter, driven by strong sales of its iPhone X, 8, and 8 Plus. That means the company drove smartphone revenues of $61.6 billion to lead the market. Not bad for a quarter when your marquee product started shipping in November.
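The ASP implied by those two reported figures is easy to sanity-check (the inputs are the rounded numbers from the paragraph above, so the result is approximate):

```python
# Implied iPhone ASP for the holiday quarter from the reported figures.
revenue = 61.6e9   # iPhone revenue, USD, as reported (rounded)
units = 77.3e6     # iPhone units shipped, as reported (rounded)

asp = revenue / units
print(f"Implied ASP: ${asp:.0f}")  # → Implied ASP: $797
```

That lands right around the high-$700s, consistent with the year-over-year increase of more than $100 described above.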

Apple’s ASP Increases
Apple’s ability to dramatically increase its ASP year over year in a slowing market is impressive. Over the last few years, Apple has repeatedly introduced new iPhones with higher selling prices. While this usually results in an ASP spike during the initial quarter, things tend to fall back in subsequent quarters. But there’s not always a clear pattern to this rise and fall. For example, in 2017 the third fiscal quarter (second calendar quarter) was Apple’s second-highest at $703. For the total year of 2017 Apple’s ASP was $707, up from $647 for the full year 2016, which was down from the previous full-year ASP of $671.

To put Apple’s recent ASP performance into perspective, let’s look at Samsung’s numbers. While Apple outshipped the company in the holiday quarter, Samsung still shipped more smartphones in total for 2017 (317.3 million units versus Apple’s 215.8 million). But Samsung’s ASP has headed in the opposite direction. Based on IDC’s data (Samsung doesn’t publicly announce units or ASPs), the company’s calendar third-quarter ASP was $327 (note: we don’t have Q4 ASP yet). For the first three quarters of 2017 combined, Samsung’s smartphone ASP was $313. That’s down from $319 in 2016 and $344 in 2015. (Samsung’s ASP tends to spike in calendar Q2, around the launch of its latest flagship Galaxy S phone.)

It’s worth noting that Samsung isn’t the only smartphone company with declining average selling prices, and Apple isn’t the only one with year-over-year increases. In fact, many of the top ten smartphone vendors have managed to increase their ASPs year-over-year through three quarters of 2017, but none have managed to increase as dramatically as Apple. And of course, none are operating at its scale.

Continued ASP growth?
So the question becomes, can Apple maintain or grow its iPhone ASP in 2018, or has it reached the top of the mountain? There are a number of factors to consider, including some things that are unique to this year’s market. One key question is whether everyone who wanted an iPhone X, and who could afford it, already bought it in the fourth quarter. This seems unlikely. While Apple sorted supply constraints quickly after launch, there were undoubtedly some who looked at early wait times and opted to hold off until the dust settled.

Another new wrinkle this year was Apple’s launch of three new phones instead of two. While the iPhone X has the highest ASPs, the iPhone 8 and 8 Plus also carry high prices and were a major driver in Apple’s quarterly increase. In past years Apple launched two new flagship phones, so we’re in uncharted waters with three, which means even as shipment mix shifts in subsequent quarters, the ASPs may hold nearer to the top than in the past.

Another element is Apple’s ongoing iPhone battery life public relations challenge. During the earnings call, one analyst asked Tim Cook if he felt Apple’s battery-replacement program might incentivize buyers to get a new battery for their existing phone and to hold off on buying a new iPhone. This might impact total iPhone shipments to a slight degree, but as wise folks have noted, these customers probably weren’t on the verge of buying a new top-line iPhone anyway. (Cook said Apple was more concerned about taking care of customers than worrying about its impact on future shipments.)

The bigger question for me is how Apple will price the new phones it will launch later this year. Supply-side chatter suggests that there will likely be at least one new X-class phone with a larger screen than today’s 5.8-inch product. Can Apple sell this phone at an even higher ASP than today’s iPhone X, or will it need to price this larger phone at today’s top-end and lower the price of the iterative 5.8-inch product? Also, do the 8 and 8 Plus get refreshed, or do they stay the same and see a price drop? My gut tells me the company may have maxed out its ability to raise the top-end price, but it has surprised me before so only time will tell.

The next several quarters should be instructive in this regard. If Apple’s ASPs drop significantly over the next six months, indicating a mix shift away from the top end, Apple will have a good sense of what the market will support. In the meantime, we have Samsung’s Galaxy S9 launch to watch later this month. How Samsung markets and prices this phone should be instructive, too.

News You might have missed: Week of February 2, 2018

Earnings Thursday!

Apple

  • Revenue reached $88.3 billion, growing 13% over Q1 FY2017 and above the high end of Apple’s guidance.
  • This quarter was a week shorter than the year-ago quarter: average revenue per week was up 21%.
  • Apple’s business is growing in all product categories and in all regions worldwide.
  • Apple’s installed base hit 1.3 billion devices in January, up 30% in just two years.
  • iPhone saw its highest revenue ever.
  • Apple returned $14.5 billion to investors during the quarter.
  • In the March quarter, Apple expects revenue to be between $60 billion and $62 billion.

Via Apple 
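The per-week comparison in the bullets above can be reproduced with a quick calculation. Note that Apple’s Q1 FY2017 revenue of roughly $78.4 billion is not stated in the bullets; I am supplying it from Apple’s prior-year report:

```python
# Reproduce the ~21% growth in average revenue per week.
# Q1 FY2018 was a 13-week quarter; Q1 FY2017 had 14 weeks.
rev_2018, weeks_2018 = 88.3e9, 13
rev_2017, weeks_2017 = 78.4e9, 14  # FY2017 revenue from Apple's prior-year report

growth = (rev_2018 / weeks_2018) / (rev_2017 / weeks_2017) - 1
print(f"Revenue per week up {growth:.0%}")  # → Revenue per week up 21%
```

Normalizing by weeks matters here: on raw revenue the quarter grew 13%, but per week the growth was 21%, which is the fairer year-over-year comparison.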

iPhone

  • 77.3 million iPhones sold in the quarter, down from 78.2 million in the same period of 2017, but with one week less in the quarter. Had Apple operated on a 14-week quarter, iPhone sales would actually have been up yoy.
  • Production for the iPhone X started to meet demand only in December
  • iPhone X was Apple’s top-selling model since it started shipping in November all the way through January
  • iPhone 8 and iPhone 8 Plus rounded out the top three
  • iPhone ASP increased to $796 from $695 a year ago, driven mostly by iPhone X. Luca Maestri is guiding to a lower ASP in Q2 due to the change in sell-through mix. He said: “We typically reduce channel inventory for our newest iPhones in Q2 because they enjoy very large demand in the initial weeks of sales, which are compounded by the holiday season in Q1. So we anticipate doing that in Q2 this year as well. For ASPs, there’s also another element that we need to consider. As you know, our newest products this year have higher ASPs than they had in the past. And so as a result, as we reduce inventories of these newest products, the overall ASPs for iPhone in Q2 will naturally decline sequentially by a higher percentage than we have experienced historically”
  • Sell-through in Q2 is expected to be higher than a year ago, while sell-in will decrease as the channel was prepped in Q1, especially for iPhone X
  • When asked how the battery replacement offer would impact replacement cycles, Cook said they did not consider that at all when making the decision. I have no doubt that is the case, as maintaining loyalty – which they quoted at 96% in the US – is way more important in the long run than making a sale in the short term. Apple knows that looking after its customers now will mean an upgrade eventually, so it is just deferred revenue.
  • Apple reached a 1.3 billion installed base, a number that is a solid foundation for continued growth in services, current and future. Of course, the iPhone makes up the great majority of this number
  • The big question in 2Q will be China as Chinese New Year falls in mid-February and that is always a big driver
  • The ASP increase, even if the extent we saw this quarter will not be sustained, is quite remarkable considering where the rest of the market is. It also may be a more realistic metric going forward than units. As more and more markets reach saturation, being able to upgrade users and have them spend more is critical.
  • Replacement cycles came up on the call, but Tim provided a somewhat vague reply. From the data I have seen, the cycles are lengthening, but by a few weeks rather than months. It is an issue that will impact the whole market sooner or later, which raises the question of what vendors should do to keep upgrades steady. Apple’s annual upgrade program is one answer, and its numbers have been growing quite steadily in the US, but it is still small in comparison to multi-year plans.

Services

  • Services revenue grew 18% yoy, reaching $8.5 billion. Paid subscriptions reached 240 million, up 58% yoy.
  • “Apple Pay is now accepted at more than half of all American retail locations, which includes more than two-thirds of the country’s top 100 retailers”

Apple Watch

  • Apple Watch had a great quarter “with over 50 percent growth in revenue and units for the fourth quarter in a row, and strong double-digit growth in every geographic segment. Sales of Apple Watch Series 3 models were also more than twice the volume of Series 2 a year ago.” While the performance is clear from the growth in revenue, it is still hard to point to an actual volume number, as the figures Apple is comparing the latest performance to are not public. So basically they are growing 50% in units over a number we do not know.
  • It is fascinating to me how Apple is able to talk about wearables now that the category includes AirPods, while brands like Plantronics have been unable to ride on the back of the excitement this segment name created a few years ago. Of course, by now wearables pretty much means Apple, especially when you are talking about smartwatches.

Mac and iPad

  • These two products did not get much love during the earnings call, but they both performed OK, with iPad in particular growing in both volume and ASP.

Overall, Apple remains an incredibly strong company that is solidifying the foundation for future growth outside of hardware.

Alphabet

Alphabet’s Q4 results versus Wall Street’s expectations, per Bloomberg:

  • Net revenue: $25.9 billion vs. $25.6 billion expected
  • Net loss: $3.02 billion (the result of a one-time $9.9 billion charge related to tax changes)
  • EPS (adjusted): $9.70 vs. $10.04 expected

Via Alphabet

  • Traffic acquisition costs (TAC) were $6.45 billion, up quite a bit from $4.85 billion a year ago. TAC is what Google pays to third parties like Firefox and Apple so that web searches on those platforms direct to Google. Growing TAC is one of the biggest concerns with Google’s advertising business; TAC as a percentage of revenues was the highest it has been in at least two years.
  • Daisuke Wakabayashi at The New York Times linked the growth in TAC to my previous point about the value of the 1.3 billion-device Apple ecosystem in his article “Alphabet’s Earnings Disappoint. Blame It on the iPhone.”

Amazon

Full-year revenue came in at $177.9bn (£124.6bn), a rise of 31%, while profit hit $3bn, against $2.4bn in 2016. The company reported record sales in the final three months of the year, driven by a surge in online shopping over the holiday season and demand for its cloud services. Amazon Web Services, the cloud services business, accounted for $5.1 billion in revenue for the quarter, up from $3.5 billion in the same timeframe the previous year. Physical stores revenue, which primarily comes from Whole Foods, came in at $4.5 billion. This is the first full quarter to have Whole Foods revenue included in Amazon’s sales.

Via Amazon

  • Online shopping continued to drive Amazon’s revenue especially thanks to holiday shopping. Cyber Monday was the biggest shopping day ever.
  • In 2017, the highest number of users yet joined the ranks of Prime members. Similarly to the Apple installed base, these are captive users who will continue to spend within the ecosystem.
  • We did not get many details on Amazon-branded hardware, which was expected to contribute heavily to the holiday quarter revenue. We did, however, get a promise that Amazon will double down on Alexa – if that was not already clear from all the devices Amazon launched in 2017 and from how Amazon is making sure Alexa is the intelligence in the home, the office, and the car.
  • Similarly to Apple Watch, I wonder how long it will take for Amazon to mention some hardware numbers, Fire TV and Echo in particular. Putting some hard numbers behind these two segments would only increase momentum in my view.