News that Caught My Eye: Week of October 5th, 2018

Surface Headphones

This week in NYC, Microsoft announced the Surface Pro 6, the Surface Laptop 2 (both of which now come in a black variant), the Surface Studio 2, and the Surface Headphones. It is the Headphones I want to concentrate on because, in my opinion, they have been the most misunderstood product of the lot.

Via Microsoft

  • I have seen many criticizing Microsoft for entering this space due to:
    • The dominance of well-established brands like Bose
    • The narrow margin nature of the business
    • The declining sales as consumers shift to wireless earbuds à la AirPods
  • While the above three points are facts and should deter any new player from entering the headphones business, I really do not think they affect Microsoft.
  • Surface Headphones are a Surface companion. This means they are first and foremost targeting Surface users who have bought into the brand.
  • Surface has established itself as the Windows alternative to Apple Mac/MacBooks. This means that it can command a premium on its products. That said, I would expect to see offers that bundle the Surface Headphones in with a Surface.
  • The Surface Headphones are also a vehicle to increase engagement with Cortana and Skype.
  • Some believe that Microsoft should abandon Cortana and fully embrace Alexa but such a view dismisses the role that the digital assistant plays in the broader analytics game. Microsoft must continue to find ways to increase engagement with Cortana and these new Headphones are just one way.
  • Microsoft started with a design that complements PC usage, especially for the kind of audience Surface, as a brand, is targeting. Over time I could see Surface expanding its portfolio, possibly looking at earbuds with added sensors that could be used for health and fitness. Although the Microsoft Band was killed, Microsoft learned a lot about fitness and health, and that technology does not have to be limited to a fitness band to be useful.
  • Over time I would also expect to see more colors. For now, despite some criticism, Microsoft stayed true to the first Surface, something the addressable market will appreciate.

LG V40 and Its 5 Cameras

LG’s new $900-and-up V40 ThinQ is different, however. In addition to a better standard camera than its predecessors, the V30 and the G7 ThinQ, the V40 has two additional rear cameras, which provide different perspectives and fields of view. In total, the V40 has five different cameras: three on the back, and two on the front, which give its camera system a level of versatility that other phones don’t offer.

Via The Verge

  • Huawei started the camera “mine has more than yours” race and now LG is getting ahead with 5 cameras on a single phone.
  • Smartphone vendors are struggling to differentiate and for those who do not control the OS experience, the camera, which is one of the top features driving purchase, is the most natural place to focus on to drive differentiation.
  • I like LG’s approach, although I have not tried the phone first hand, because they did not just add two cameras and replicate what Huawei did, which was to use the extra cameras to improve picture quality by adding more detail. LG is using the three cameras to deliver three separate experiences, a bit like when you carried a digital camera and a set of different lenses for it.
  • The LG V40 has a standard camera for normal shots, a super wide-angle camera for capturing a wider field of view, and, in a first for LG, a telephoto camera to get closer to your subject.
  • From reading early reviews, however, the results are not as encouraging as one would have hoped and it seems that the reasons are to be found in the software and hardware choices.
  • With such a system in place and the big focus on AI, you would expect LG to have implemented intelligent mode detection that would suggest which camera to use for the shot. LG already had something similar in previous phones, where for instance the camera would suggest a “food” mode for those #cameraeatsfirst shots. Why not apply this on the V40 rather than relegating the cameras to being more of a gimmick than a real tool?
  • This is the sword of Damocles hanging over many companies that can do hardware but still struggle with software and, more importantly, that chase headline differentiation yet fail to deliver because cost control stops them from implementing the right hardware.
  • Unless innovation really brings value to customers, and not just cheap thrills, sales will see blips rather than sustained growth driven by loyalty.

Twitter Is Losing its Battles against Fake News

Knight Foundation researchers examined millions of tweets and concluded that more than 80 percent of the accounts associated with the 2016 disinformation campaign are still posting — even after Twitter announced back in July that it had instituted a purge of fake accounts.

 Via NPR 

  • Needless to say, this is bad news for Twitter and Jack Dorsey, who had recently answered questions in Washington precisely about what the company is doing to minimize the potential impact of fake news on the mid-term elections.
  • The study found that more than 60% of Twitter accounts analyzed in the study showed evidence of automated activity. Basically, 60% of the accounts the study looked at were bots. Many of these accounts were also found to be following each other, which would suggest that they share a common source.
  • Twitter’s response to the study was that the data fails to take into account the actions Twitter takes to prevent automated and spam accounts from being viewed by people on the social network.
  • So basically Twitter is saying that the problem would be even bigger than the study shows if it were not for what the company has put in place to limit fake news.
  • While Twitter might think this is a good line of defense, I am not convinced it is. To me this points to a problem that might just be too big for Twitter, or other social media platforms for that matter, to solve.
  • I am afraid I do not have the answer on how we can win this battle, but I do think we sometimes forget that these platforms are being exploited, and while it is their responsibility to protect themselves and their users, we should also try to understand why this is happening and who is behind it.
  • Normally I would say that educating the public to spot fake news should also be a focus for these brands while they try to eradicate the problem. Like you do with kids, you cannot take all the bad guys away, but you can teach your kids to spot them and be prepared. Sadly, I think that in this case most of the public does not want to learn how to avoid fake news, whether it is spread by bots, the press, or politicians.

HP and Microsoft PCs, The PC Evolved, Android + Windows and Avoiding Platform Disruption

I tweeted a chart yesterday that I love to show, and that always gets a great reaction on Twitter, with the statement “the PC is alive and well and it comes in many shapes and sizes.”

I could spend a good hour or more talking about the various roles personal computers play in people’s lives, and about how vast swathes of people have different needs and desires: sometimes a small computer is all they need, and sometimes they can’t do their job without a large-screen PC. The point is, the market is so mature at this point that consumers fully understand what they need and what they don’t. They are wise enough to know what things they value and what things they don’t. This is why we see a great deal of hardware, software, and overall feature differentiation. The landscape of personal computing has broken wide open into many slices of a big pie. Everyone is competing for their slice, and some slices are larger than others.

In light of that point, we see the broad evidence of extreme market maturity in both HP’s and Microsoft’s hardware launch events from this week. HP launched a new laptop that is bound in leather and looks, as well as I assume feels, extremely nice. Microsoft continues to evolve the Surface strategy by making impressive upgrades to previous products and launching new premium colors and finishes for Surface hardware. These are classic examples of designing hardware with specific segments of the market in mind and not the entire notebook/desktop market as a whole. This is an important distinction in how the market has evolved and how the players are looking to compete.

In the good old days of the PC market, PC companies were making notebooks and desktops with the entire market in mind. Effectively, they were trying to compete for the whole pie, not just a slice. Interestingly, this was never true of Apple, who always had their eye on a specific market and customer type. That strategy and laser focus from Apple paid off once the market segmented, but it took much longer than they expected. Now, everyone is designing with specific segments of the market in mind, and we should expect that to continue for the foreseeable future.

This is how the personal computer is evolving. And it is important to know it is a constant and continual evolution. Touch and pen are the newest features of the PC’s evolution, and neither has fully reached its potential as a part of people’s everyday productivity and creativity workflows. What companies like HP, Microsoft, and Apple are up against in the consumer space, and to a degree in the commercial space, is the simple fact of behavioral debt. While much of this new hardware is capable of new and amazing things with the use of touch and pen, the reality is old habits (workflows) die hard, and it takes some serious effort to get people to embrace new ones. So while there is optimism humans will be empowered to do new things with these new tools, or do existing things more quickly and efficiently, it is simply going to be a slow process.

Android + Windows
Something else I find interesting as a strategy for Microsoft is how they have embraced Android in a way that positions Google’s platform as the best companion to a Windows PC. Now, it is worth noting that if Apple’s iOS platform were as flexible and customizable as Google’s, Microsoft would try the same thing and deepen their hooks from Windows into iOS in the same way they are with Android. But Apple’s platform is much more tightly controlled and thus limits how deeply Microsoft can create software hooks for Windows customers.

While a significant portion of Windows users are iPhone owners, I find Microsoft catering to Android customers interesting strategically. While Microsoft was, and in some cases still is, hostile toward Google, a strategy they should have adopted long ago was to use Android’s openness to their advantage and attempt to usurp the platform for their benefit. This is essentially what they are now doing with Android, and had they done this long ago, instead of buying Nokia, I think their position as a services player on mobile devices would be much farther along than it is today. Hindsight is 20/20, I know, but the nature of Android being more open should have been viewed more as an opportunity than a threat by Microsoft even back then.

What I’m getting at here is actually a fascinating point about the nature of open systems which threaten to displace or disrupt incumbents. Microsoft used to have 97% share of all computers sold every year. That number is now less than 10%, mostly because of the ~1.3 billion Android smartphones sold every year. You could argue Microsoft was displaced in mobile because of Android, and you are likely correct. But it is worth pointing out that only an open platform had a chance to displace the previous open platform. This is because an open platform enables a vast array of hardware companies to run its software. Which means, if I’m an open platform like Windows, and I get the sense another open platform is about to displace me, I should embrace that open platform as soon as possible and attempt to usurp it for my own gains. I know this goes counter to a lot of business theory, and I understand why Microsoft did what they did. However, in this case, the nature of the threat from an open platform meant that they had, and still have, every opportunity to leverage the very thing that makes it possible for them to be displaced, which is the fact that the threatening platform is indeed open.

This movie will play out again, and I’ll be fascinated to see how quickly the incumbents learn from history.

Verizon Campaigns Confusion with 5G Internet Service

This week Verizon became the first company to deploy a 5G service for consumers. Rolling out in Houston, Indianapolis, Los Angeles, and Sacramento, it is called Verizon 5G Home and promises speeds “up to 1 Gbps” for internet access over cellular wireless technology. Service should “run reliably” at around 300 Mbps, peaking at that 940 Mbps level during times of low utilization and based on your home’s proximity to the first 5G-enabled towers.

The problem is that Verizon 5G Home is not really a 5G technology. Instead, Verizon admits that this configuration is a custom version of the next-generation network that was built to test its rollout of 5G in the future.

It is called “5G TF” which includes customizations and differences from the 3GPP standard known as 5G NR (new radio). As with most wireless (or technology in general) standards, 5G NR is the result of years of debate, discussion, and compromise between technology companies and service providers. But it is the standardization that allows consumers to be confident in device interoperability and long-term success of the initiative.

5G TF does operate in the millimeter wave part of the spectrum, 28 GHz to be exact. But 5G isn’t limited to mmWave implementations. And the Verizon implementation only includes the capability for 2×2 MIMO, less than the 4×4 support in 5G NR that will bring bandwidth and capacity increases to a massive number of devices on true 5G networks.
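
To put rough numbers on why the antenna configuration matters, here is my own back-of-the-envelope simplification, not a figure from Verizon or the 3GPP spec: under ideal conditions the peak rate of a MIMO link scales roughly with the number of spatial streams, which is capped by the smaller of the transmit and receive antenna counts.

```latex
% Idealized spatial-multiplexing ceiling (ignores coding overhead, interference,
% and real-world channel conditions):
R_{\text{peak}} \;\propto\; \underbrace{\min(N_{\mathrm{tx}}, N_{\mathrm{rx}})}_{\text{spatial streams}} \cdot B \cdot \log_2(1 + \mathrm{SNR})
% 2x2 MIMO -> at most 2 streams
% 4x4 MIMO -> at most 4 streams, i.e. roughly double the ideal ceiling
%             over the same spectrum
```

By that crude measure, the 2×2 limit is capacity Verizon’s 5G TF gear simply leaves on the table relative to full 5G NR hardware, before you even get to the interoperability problem.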

Upcoming 5G-enabled phones and laptops that integrate a 5G NR modem will not operate with the concoction Verizon has put together.

Verizon even admitted that all of the 5G TF hardware that the company is rolling out for infrastructure and end user devices will need to be replaced at some point in the future. It is incompatible with the true 5G NR standard and is not software upgradable either. From an investment standpoint you can’t help but wonder how much benefit Verizon could gain from this initiative; clearly this will be a financial loss for them.

But what does Verizon gain?

The truth is that Verizon is spouting these claims for the world’s first 5G network as a way to attach itself to a leadership position in the wireless space. Marketing and advertising are eager to showcase how Verizon is besting the likes of AT&T, T-Mobile, and Sprint with a 5G cellular rollout in the US, but it’s just not accurate.

Take for example the AT&T “5G Evolution” that was actually a 4G LTE service with speeds up to 1.0 Gigabit. An amazing feat and a feature worth promoting, but the carrier decided instead to message that it was part of the 5G transition.

Both of these claims do a disservice to the true capability and benefits of 5G technology while attempting to deceive us into believing each is the leader in the wireless space. As a result, consumers end up confused and aggravated, removing yet another layer of trust between the customer and service providers. Other companies that are taking care with the 5G story, whether competing ISPs or technology providers like Qualcomm, suffer the same fate through no fault of their own.

These antics should come as little surprise to anyone that followed along with the move from 3G to 4G and LTE. Most insiders in the industry hoped that we had collectively learned a lesson from that turmoil and that 3GPP might be able to help control these problematic messaging tactics. Instead, we appear to be repeating history, and it will be up to the media and an educated group of consumers to tell the correct story.

Why Marketing to Millennials Matters

I recently started looking at a study by GraphicsSprings that researched millennials’ brand recognition of six major IT companies. The focus of the study is on the impact these companies’ logos had on the demographics it studied, but I would argue that what each company does and how it impacts these customers is why these brand logos do or do not resonate with different age groups.

Here is the chart that highlights their findings:

“The table below compares the generational recognition differences between Millennials and Baby Boomers of 6 IT companies which feature in the top 200 global corporations, all of which hail from The United States:”

“The results above are a clear indicator of how brand recognition changes from generation to generation. Dell Technologies, for instance, is recognized by 80% of Baby Boomers, but only 45% of Millennials in America, whereas Apple is universally recognized, no matter the age of the respondents. Additionally, Microsoft is 90-95% recognized by those born from the 1940s to 1985, whereas it drops down to 75-85% for Millennials in US and Europe, suggesting that Apple reigns supreme with the younger generation in these regions. Interestingly, recognition for Microsoft in Asia remains high across all generations.”

Not surprisingly, Apple has 100% brand recognition across all demographics in the regions studied. But millennials’ view of all the others is mostly down compared to baby boomers’. I see this shift in millennials’ view of these brands as being problematic for these companies. If they are not careful, they could descend into being looked at by millennials, and Gen Z, as their parents’ tech companies and become less relevant to them over time.

Part of the issue is that, at the hardware level, only Apple really gets the attention of these younger generations. The iPod struck a huge chord with many millennials when they were in their teens, and for Gen Zers, the iPhone delivers their music to them on demand. Add the iPad, the Mac, and the Apple Watch to this, and Apple has a suite of products that sets them apart from other tech vendors and, in these younger eyes, makes Apple cool.

Although the “cool factor” seems to be a key driver in how millennials view these companies, how these products look and feel and help them with their status among peers is another important thing to consider. I was shown an internal report from a company that looks at younger generations’ buying preferences and studies millennials’ technology buying trends. I was surprised at how high products boosting their status among friends ranked on this list. In fact, in this internal survey, it ranked in the top 5 considerations.
Millennials also need to trust the brands they buy from.

Eileen Brown of ZDNet summarized a research study from the IT community Spiceworks that addresses this issue:

“Austin, Texas-based online IT community Spiceworks reached out to almost 700 IT buyers in organizations across North America and Europe during March 2018 to examine different generations of IT buyers. The results show that a different mix of brand and product attributes influence millennials who will respond to different tactics to engage them.

Around 85 percent of respondents said that they need to trust a tech brand before making a purchase. Over half (57 percent) of IT buyers prefer to purchase from tech brands that focus on building a relationship compared to those looking for a quick sale. This is reinforced amongst millennial IT buyers (born 1981 to 1997), where 34 percent said they need to have personal experience with a brand, such as an email exchange or in-person encounter, before making a purchase.

Only 17 percent of baby boomers (born 1946 to 1964) and 25 percent of Generation X respondents (born 1965 to 1980) felt the same way.
Millennials seem to be less responsive to impersonal marketing tactics such as cold calls, direct mail, and mass emails.
Meaningful brand relationships and personal brand experiences are also more important to millennials than older Generations.
Millennials are more likely to be influenced by their personal tech preferences — 65 percent believe the technologies they purchase for personal use influences the technologies they purchase for their organization, compared to 55 percent of Generation X and 57 percent of baby boomers.

Industry buzz will prompt 17 percent of millennials to purchase a new personal device compared to 10 percent of Generation X and only 8 percent of baby boomers.”

One other chart from the Spiceworks study is particularly interesting. It points out that 26% of millennials want the company they buy from to align with their values.

Millennials represent 80 million people in the US, and most are already in the workforce. More importantly, they are big consumers of technology at the personal and professional level. Tech brands need to be really aware of this age demographic and look closely at making products that not only meet their needs but are, in some ways, also cool. Apple nails this with their products, and other tech companies need to be more vigilant in creating products and services that this demographic not only needs but also really wants. Otherwise, they could be thought of as their parents’ tech company and not be in the running when this group pulls the trigger on the products they buy for personal or even business use.

Apple Watch Series 4 to Drive Strong Upgrade Cycle

When I first saw the new Apple Watch presented at the Steve Jobs Theater, I immediately said it would drive a strong upgrade cycle, and now we, at Creative Strategies, have brand new data from a study we conducted across 366 current Apple Watch owners in the US in the week leading up to in-store availability. The study was an international one that cut across several geographies, touching a total of 557 consumers. For this article, I will focus on the US data only.

Our panelists were self-proclaimed early adopters of technology, with 64% of them owning an iPhone X. Eighty-four percent of the people who answered our online questionnaire were men, very much in line with the average composition of the early tech adopter profile.

Apple Watch Served its Base Well from the Get-go

Our panel owned a good mix of models: 41% have an Apple Watch Series 3 with Cellular, another 13% own an Apple Watch Series 3 Wi-Fi only, and 15% have a Series 2. What was a surprise, considering how early tech this base is, was to see that 30% still owned an original Apple Watch.

One might argue that maybe the reason why these users are still on the original Apple Watch is that they are not very engaged with it. The data, however, says otherwise. While they are not as engaged as Apple Watch Series 3 owners, they share their love for the same tasks: declining calls, checking messages, and checking heart rate. The most significant gap with owners of more recent Apple Watch models is in the use of Apple Watch as a workout tracker. Here original Watch owners lag Watch Series 3 owners: 62% to 76%.

Satisfaction among original Apple Watch users is also strong with 93% of the users saying they were satisfied with the product. While 93% is a lower satisfaction number than Watch Series 3 with cellular at 99%, we need to be reminded that the original Apple Watch was introduced in 2014. Satisfaction at 93% for a four-year-old product is quite impressive.

When we reached out to a few panelists to ask why they did not feel compelled to upgrade so far, they mentioned that software updates and battery life kept them happy and that it would take a change in design and compelling features to drive them to look at a new model. In other words, the original Apple Watch was still serving them well.

Strong Intention to Upgrade

Apple Watch Series 4 seems to hit both upgrade requirements for original Apple Watch owners, as 76% say they plan to upgrade, with 41% having already pre-ordered and another 32% planning to do so in the next three months. When asked to select the most compelling new features that made them interested in upgrading, the faster processor was mentioned by 80% of the original Watch owners. This was followed by the bigger screen (75%) and the ECG (61%).

Apple Watch Series 3 owners are similar but with different priorities. The larger screen is the most important driver, followed by the faster processor and the ECG. The intention to upgrade is also more cautious, with 29% saying they are planning to upgrade (54% having already preordered), with some users being concerned about using their old bands on the new model and some uncertainty over which size they would prefer.

Early Tech Users find Gifting Difficult

We have discussed before that early tech users seem to find gifting new tech hard, and Apple Watch owners on our panel are precisely like that. When we asked if they were planning to buy the new Apple Watch Series 4 as a gift, only 26 percent said they were. This is despite Apple Watch commanding a Net Promoter Score of 72 among panelists. Among the users who are planning on gifting an Apple Watch, 51% will give one to their wife, and another 16% will give one to a parent. When asked which features are motivating the purchase for someone else, four stood way above everything else: larger screen (49%), ECG (45%), and faster performance and fall detection (both at 39%).

Among those intending to gift, 22% already preordered and 48% plan to buy within the next three months.

The Apple Watch User Base is Deep into the Ecosystem

Probably the most fascinating finding of this study is to see how entrenched in the ecosystem Apple Watch users are. While many could see Apple Watch as an accessory, I firmly believe that the users who look at it as an essential tool to manage their day and their ecosystem of devices and services are the ones who get the most return on investment. Not surprisingly, multi-device ownership across the panel is quite high: 88% owned an Apple TV, 75% owned AirPods, 71% owned a MacBook Pro, 67% owned an iPad Pro, and 44% owned a HomePod.

Early tech users are a window into the future, which is why it is so valuable to study them. While the time it takes to turn from early adopters to mainstream users might vary, I think this ownership data best illustrates what Apple is working on when it comes to its user base. I have been saying for years that Apple cares more about selling more products to the same users than just expanding its overall market share in one area. As Apple moves more into services, it will be the combination of products present in a household that will drive engagement and loyalty and build an audience for life.

 

Facebook’s New Reality and The Case Against Hyper Scale

Facebook was once viewed as invincible. When chatting with VCs, investors, and many in the business world, there seemed no safer bet than Facebook’s hyper-growth business. It seems the invincibility of Facebook carried over into the mainstream media, given the amount of backlash we received when we published some recent research on US adults that suggested 9% of US adults on Facebook had deleted their account and 17% had deleted the app off their smartphone. It was entertaining to watch the number of people on Twitter respond to our research saying we were crazy or the data was flawed. Most of this came from people who have no idea how to do research or what solid methodology looks like, but that is a different discussion.

But it isn’t just our research confirming some changing behaviors of US adults and their Facebook usage. This report from Pew Research, which was conducted a few months after our research, not only confirms many of our findings but has even more US Facebook users reporting deleting the app off their smartphone than ours did, at 26%. Interestingly, it isn’t just ours or Pew’s research confirming this; I’ve read no fewer than three private studies from Wall St. investment banks, all turning bearish on Facebook, citing their research showing a change of behavior amongst Facebook users.

I have no doubt something is changing amongst Facebook users, and Facebook knows it. Anyone using Facebook is seeing regular Facebook messages in their feed attempting to assure users that Facebook is doing everything it can to preserve the community, focus on people and relationships, make sure toxic material is not used to divide, and a host of other positive messages. Facebook would only do this if they are very concerned and seeing data that suggests US users are becoming less engaged.

After we published our research, we shared a number of charts, but the one below is one we have not shared before. I picked certain answers from this overall sentiment question to save space and focus on ones that I believe are the most telling.

Personally, I’m not sure Facebook recovers from this. It certainly doesn’t help that they keep having issues with hackers and data breaches. At a high level, the little trust users had in Facebook is quickly eroding. But the reality for Facebook is their users aren’t actually going anywhere. While I do think there will still be small pockets of people who stop using the service, I think the big impact will be people using it less. But I do not think there is a large-scale opportunity for an alternative to Facebook. People have spent years building their network on Facebook, and there is too much sunk cost in Facebook in terms of human time. Staying in touch with friends and family remains the single largest indicator of US consumers’ motivation to keep using Facebook, and that isn’t going to change.

Facebook used to be a place where US consumers would spend a significant amount of time browsing, killing time, and generally using Facebook as a source of entertainment. All of that has changed, and I have strong doubts consumers will go back to that behavior. Instead, Facebook usage may be driven by events or moments in time instead of general daily browsing. Things like a loved one got married, someone you know is sick or dealing with a trial, or just something very personal you want to stay informed about that is happening with a friend or family member. These more specific events seem like more logical drivers of Facebook usage than the prior behavior of using Facebook being fun or entertaining since it is generally no longer such a place.

Making a Case Against Hyper Scale
In big-picture conversations about the future, I tend to make a point that sometimes seems controversial. I’m not sure we will ever see a company with multi-billion users like Facebook or Google again. I know one should never say never, but my hunch is the kind of scale Facebook has achieved is unique and the result of a point in time not the new normal.

Every company that reaches hyper scale, which is a billion users or more, becomes the target of malicious attacks. Some companies, like Apple, handle this better than others, but just look at Microsoft, Facebook, and even Google for examples of bad actors looking to exploit their platforms. This is inevitable when a platform has so many users, and as Facebook has proven, this kind of scale is not for everyone.

While I make the case that we may not see a company with such scale ever again, I talk with many startups and investors for whom this kind of scale is the goal, and often I argue it should not be. The hindsight view of companies at such scale is that too often the compromises are not worth the costs. Sure, some people may make a great deal of money, but there are many negatives beyond that. Once a company gets to that scale, and is public, it faces financial pressure that is honestly unhealthy and sometimes dangerous. There seem to be many more positive stories and customer experiences around companies that focus on owning their niche and profiting greatly from it. The problem is that VCs, investors, and often entrepreneurs crave that scale, but it comes at a cost and always will.

Are Leather and LTE the Future of PCs?

As technology evolution and maturation continue to move forward, many PC and other device companies emphasize the experience of using their products as key to their design philosophy. The goal, they say, isn’t just to deliver on the key technical requirements and other specs necessary to provide good performance, but to make the overall encounter with their devices engaging and inspiring.

Few, if any, however, have taken the experience concept to the level that HP Inc. has done with their new Spectre Folio convertible PC design. How about a PC that you actually want to smell? Thanks to its very attractive leather-based design, the company has managed to create an elegant, premium feeling and, oh yeah, pleasant-smelling notebook computer that also incorporates an intriguing new take on convertible designs.

Rather than simply wrapping a notebook in leather, HP has actually built the Spectre Folio into the leather casing in a way that makes it an integral part of the device. The end result on the outside is a device that has the smooth, wonderfully tactile sensation that leather provides on quality briefcases, handbags, portfolios, and other non-tech products. Inside, however, is a fan-less, 0.6” thin PC design—driven in part by the non-porous nature of leather—that still manages to incorporate Intel’s new 8th generation Amber Lake 5-watt Y-Series CPU designs (both i5 and i7 versions are available), 802.11ac WiFi, up to 18 hours of battery life, a 13.3” 400-nit display, and an option for a 4K screen. It’s a tremendous mashup of both old-world craftsmanship and cutting-edge technology. At a starting price of $1,299, it’s not a cheap offering, but it’s in the range of what you’d expect to pay for a premium device.

Another interesting aspect of the convertible design on the Spectre Folio is the ability to pivot the bottom of the screen forward into a tent mode that’s much easier to do than on typical, hinge-based designs (and doesn’t require the screen-switching hassle, either). So, if you want to watch a movie on a plane, or present slides to someone nearby, you can easily do so, and still leverage the touchpad, which is actually a nice detail of the design. Like many convertibles, the Spectre Folio also ships standard with a cordless pen with 4,096 points of pressure. One additional convenience, however, is that it fits neatly into the pen loop built into the side of the leather casing, making it less likely (at least theoretically!) to be lost.

In addition to its luxurious design, the Spectre Folio offers another intriguing connectivity option: an Intel-built LTE modem offering up to 1 Gigabit download speeds. Of course, with the Always Connected PC initiative, Qualcomm has been banging the drum of cellular connected PCs for a while now, and PC companies have offered integrated modems for years. Despite both these efforts, attach rates for LTE-equipped PCs have remained very low, due in part to the additional cost of a monthly data plan, as well as the ease of using integrated hotspot capability in today’s smartphones.

While none of these issues are completely going away with the Spectre Folio—though HP and Intel announced a special deal with Sprint that offers free cellular service for 6 months when you purchase one—another issue is starting to become a bigger concern: security. With the rising awareness of the potential vulnerabilities in public WiFi networks, many individuals and businesses are starting to reconsider their connectivity choices and looking seriously at the private, single-device connections offered by LTE cellular networks. I certainly don’t expect to see a massive shift occur anytime soon, but if there’s anything that’s going to make integrated LTE a more attractive option to some, it’s security that could start to shift the tide.

One nice detail of the Spectre Folio LTE implementation is that it includes both support for a physical SIM card and an eSIM. Many US carriers have been somewhat reluctant to support eSIMs in the past because of the potential ease of switching between carriers (they enable “digital” switching instead of having to get a new SIM). However, now that Apple added eSIM support in their latest line of iPhones, the tide of carrier support for them is already starting to change.

The new HP offering represents an intriguing new option for the premium PC market. While it’s easy to write off the leather-wrapped design as little more than a gimmick, the ability to bring an appealing physical experience to a quality digital experience is likely something that many demanding PC users are going to find attractive. I also wouldn’t be surprised to see it inspire a raft of competitors that offer similar physical advancements—especially given the overall device experience focus that so many PC companies now have. Given the more evolutionary advancements now occurring in PC technology, it just makes sense to bring new tactile improvements to our everyday computing.

Could VR Ever Become a Mainstream Consumer Technology?

Over the last year, I have written many times that I believe AR is the major technology that will gain the broadest acceptance by a consumer audience. Up to now, VR has mostly struck a chord with gamers and for use in vertical markets, where it is used to visualize new automobile designs in a VR/3D environment, for manufacturing prototypes, and for numerous applications where VR solves a specific problem.

Even when Microsoft gave us HoloLens, the major focus of its apps was education-based, also considered a vertical market by many. It did have some games, and it got much attention as a mixed reality headset, but the emphasis was more VR than AR.

I have felt that AR, or a form of mixed reality that skews more to AR functionality, is the technology that would gain the most significant interest from consumers over time. I still think that AR, with special glasses and an easy-to-use interface that includes voice and gestures, will become the technology that gains the most considerable market acceptance by consumers.

However, there are some developments in VR, and especially in consumer-focused VR headsets like the new Oculus Quest that was introduced last week, that suggest VR could become more of a consumer product over time. I am not sure it will gain a billion users as Mark Zuckerberg suggested last week at the Oculus developer conference, though.

For the past two months, I have spent much time with two VR headsets: the Daydream-based model from Lenovo called the Mirage and the Oculus Go. These are stand-alone VR headsets and cannot be used for AR or as mixed reality headsets. They are relatively low-powered devices but can deliver a broad 3D/VR experience, albeit with low-quality video resolution. When I first started using them, I was not sure what to expect. At the moment, the VR and 3D content is very limited. While the 3D games are fun and I enjoy some of the nature documentaries, I find myself using them more in a 2D mode, watching things like Facebook videos, Netflix, Hulu, and other content as if I were sitting in front of a big movie theater screen.

I was over at the Oculus developer conference last week and was pleasantly surprised to see how many thousands of developers came to this event and seemed willing to create content for the Oculus platform. More importantly, they are genuinely excited about the Oculus Quest, given that its $399 price point will make it more consumer friendly. However, keep in mind, this is a closed headset for VR and cannot be used as a mixed reality headset. But if an immersive VR experience is what one wants and developers support it with thousands of true VR apps, this headset could be a big hit.

I also spent some time in a private suite while at the developer’s conference looking at an app in the works on a Magic Leap headset. Although Magic Leap is way too expensive for consumers, it is clear that its approach to bringing VR and AR or mixed reality to the market could be the better way to get consumers to adopt this important new technology in the future. In fact, should Magic Leap ever get into consumer pricing ranges, its potential could be huge.

How the Oculus Quest performs when it comes to market next year will be important to watch. It will need thousands of apps that really gain the interest of consumers for it to gain a broad audience, even at $399. While I do think VR headsets like the Oculus Quest have potential, I just don’t see dedicated VR headsets and VR-specific applications gaining the most significant consumer audience compared to what I believe AR or mixed reality headsets can in the future.

These dedicated VR headsets seem more like appointment-based products. I would use them for virtual room meetings with friends, for educational purposes, VR and 3D movies, etc. But I would not wear them when walking around as you might with AR or mixed reality glasses. That is why you have not heard Apple talk about VR. Their focus is on AR and its potential mass-market appeal. I believe Apple will enter the AR glasses market, tie it to their overall ecosystem of products and services, and ultimately be the one that defines how AR gets adopted.

After looking at the Oculus Quest, seeing thousands of developers willing to support it, and expecting Google and their partners to create a Daydream-based competitor at this same price point and functional level, VR could gain broader consumer interest well beyond its acceptance in vertical markets today. I just don’t believe it will get a billion users to buy in. The majority of users who adopt AR/VR will do so via mixed reality or dedicated AR glasses, through a headset that can be worn anytime and anywhere and that delivers AR and VR-lite applications that enhance real-world experiences, not ones isolated to viewing from a fixed headset designed more for appointment-based applications.

Microsoft’s WVD Demonstrates A Smart, Evolved View of Windows

There was quite a bit of news out of Microsoft’s Ignite conference in Orlando this week, but its announcement of Windows Virtual Desktop (WVD) was one of the most monumental to my mind. WVD represents not just a strong product offering in an increasingly competitive space, but it also reflects the much smarter, more evolved view of Windows that Microsoft has embraced under current CEO Satya Nadella.

Virtual Desktop Basics
Virtual desktops aren’t new, and Microsoft’s partners, and competitors, have been offering access to Windows in this manner for years. In short: A virtual desktop is one that runs on a server or in the cloud, accessed via a client app or browser on a client endpoint. In the past, these endpoints tended to be low-cost thin client hardware, which companies used to replace more costly PCs. Today, pretty much any device with a browser can act as a virtual desktop endpoint, from phones and tablets to non-Windows PCs running Google’s Chrome OS and Apple’s macOS.

Virtual desktops are one of those technologies that have always looked great on paper but often disappointed in practice due to their heavy reliance on a stable, fast network connection. As network speeds and quality of service have improved over time, and as LTE has become more prevalent, virtual desktops have become increasingly viable. And the pending rollout of 5G should drive even better performance over mobile networks. Over the years, Microsoft partners have rolled out increasingly capable Windows-based offerings, and so have direct Microsoft cloud competitors. Microsoft executives were quick to note at Ignite that existing partners will be able to leverage (and sell) WVD and that its goal with this announcement was to offer a differentiated product that better positions it against competitors such as Amazon and Google.

WVD’s Special Sauce
Microsoft has put together a compelling package with WVD, which will launch as a preview later this year. One of the most notable features is the ease with which current customers can spin up virtual machines and the flexibility around licensing and cost. Existing customers with Microsoft 365 Enterprise and Education E3 and E5 subscriptions can access WVD at no extra charge, paying Microsoft for the Azure storage and compute utilized by the virtual desktops.

Microsoft says WVD is the only licensed provider of multi-user virtual desktops. Multi-user means that a company can provision a high-performance virtual desktop and then assign more than one user to that desktop, leveraging the performance and storage across more than one employee. Microsoft also says that WVD users will get access to Office 365 ProPlus optimizations for a smoother virtualized Office experience.

Finally, Microsoft announced that WVD users will have the ability to run Windows 7 desktops well beyond the January 2020 end-of-life date. This will allow companies that are behind in their Windows 10 transition, or that are struggling to move proprietary apps to the new OS, more time to make the move. For many, this feature alone may represent a strong reason to try WVD.

An Evolved View of Windows
Microsoft’s WVD looks to be a compelling product, and I look forward to testing it out when it becomes available. But beyond the announcement itself, I’m most impressed by what it signifies about the company’s evolution in thinking around Windows. Microsoft under Bill Gates or Steve Ballmer could have offered a version of WVD, but it never did. And if it had, you can be sure the company would have charged a licensing fee for every single virtual desktop it served up.

Under Nadella, the company has moved away from Windows as the product that it must sell, and protect, at all costs. Today, it’s willing to offer an easy-to-deploy virtual desktop to existing licensees to drive more customers toward Azure and its Microsoft and Office 365 offerings. Perhaps just as important is the underlying (and unspoken) acknowledgment that the installed base of traditional endpoint devices running Windows natively has likely peaked, while the number of primarily mobile devices running other OSes will continue to grow. By offering a best-in-breed experience that lets companies and employees run a Windows desktop on these devices when needed using nothing more than a browser, Microsoft helps ensure that Windows remains an important business platform well into the future.

News that Caught My Eye: Week of Sept 28, 2018

Oculus Quest

This week at Oculus Connect 5, Oculus announced that in March 2019 they will ship the Oculus Quest. The standalone VR device will be the first wireless Oculus hardware to sport positional tracking, both for the headset itself and the dual hand controllers. The headset will ship with 50-plus games made specifically for the device at launch. The headset will retail for $399.

Via TechCrunch  

  • On paper, the Oculus Quest sounds like the headset that could finally get consumers excited about VR. It delivers six degrees of freedom, performance that is a little better than the Oculus Go, and touch controllers, all at a pretty aggressive price point.
  • We have known about Santa Cruz for the past two years, and it seems that delivering it to consumers with 50 titles available from day one will still take some time. But one thing is certain: time is not an issue.
  • Consumers have yet to show their excitement for VR due to a combination of factors: the high price of a high-end experience, cumbersome setup, and unclear value.
  • Last year Oculus launched the Oculus Go, its first self-contained headset. At $200, many believed it would be the one to put some life into a market that has yet to deliver on its many early promises. Unfortunately, the Oculus Go delivered a compromised experience for which $200 seemed more than most would be happy to pay.
  • While the Oculus Quest seems a little underpowered, it certainly makes fewer compromises than the Oculus Go or the Lenovo Mirage Solo.
  • I was surprised that Oculus decided not to redesign the controllers. I know they are the best in the market, but they still do not deliver a natural experience. Maybe Oculus is keeping the update to more glove-like controllers for the Rift.
  • Oculus might be running the risk of showing its cards a little too early, especially as far as price is concerned. That said, Oculus has created a strong brand in VR, and with the possibility of new headsets hitting the market in time for the holidays, sharing specs and a price for the Oculus Quest might encourage some potential buyers to wait.

Instagram’s Co-Founders Leave Firm

Kevin Systrom and Mike Krieger, the co-founders of the photo-sharing app Instagram, have resigned and plan to leave the company in the coming weeks, adding to the challenges facing Instagram’s parent company, Facebook.

Via The New York Times

  • Systrom and Krieger did not give a specific reason for leaving, but it seems to be a common belief that disagreement was growing between the two and Zuckerberg.
  • The point of contention was how Zuckerberg wanted to monetize Instagram compared to how much of that the founders believed their creation should endure.
  • Interestingly, most of the “improvements” made to the service since the acquisition seemed more aimed at helping Facebook by bringing people back onto that platform than at growing Instagram’s own stickiness.
  • This is a very crucial time for both Facebook and Instagram. On the one hand, Facebook has been falling more and more under the government’s scrutiny and facing growing dissatisfaction among users. On the other hand, Instagram has been growing in users and engagement.
  • Even more important is the fact that many users, me included, have started using Instagram more and Facebook less. With many users having an account on both services, it is fairly easy to switch engagement and still remain connected with the core base.
  • Instagram seems to have remained immune from most of what plagues Facebook, from fake news to politically charged posts. This might not only drive more engagement but might also provide a more positive environment for advertising to thrive.
  • It will be very interesting to see what Zuckerberg will do next, now that Systrom and Krieger are no longer there to try and keep Instagram as true as possible to their original vision.
  • As for Systrom and Krieger and what they will do next, they did not share much, but it would be hard to believe they can pull another Instagram-sized success out of their hat.

Samsung Electronics Chairman Indicted on Charges of Union Sabotage

Lee Sang-hoon, the chairman of Samsung Electronic’s board of directors, has been indicted for allegedly sabotaging unions in what prosecutors claim is a violation of South Korea’s labor laws, reports the Financial Times. Lee, who became chairman in March of this year, will face trial along with 31 other executives from Samsung and its affiliates. It’s the latest in a long series of corruption charges against Samsung executives.

 Via The Verge  

  • It has been a long year for Samsung’s leadership, starting with the indictment of Vice Chairman Lee Jae-yong for corruption. He was released from prison on a suspended sentence.
  • Then came the reorganization that tried to breathe new life into the company with younger leaders, appointing Kim Kinam as head of Samsung’s semiconductor and display businesses, Kim Hyunsuk as head of its consumer electronics division, and DJ Koh as head of the mobile business.
  • While this kind of news makes for good headlines, it usually has little impact on the choices consumers make about what they buy, especially at an international level.
  • If you are thinking about the impact on company morale, you need to consider cultural differences both at a country and a company level. When the Vice Chairman was indicted, feelings were very mixed. Some felt it was time for a change and were appalled by the behavior, but many could not forget that Samsung, at the end of the day, is a cornerstone of the Korean economy and, more personally, the one that puts food on the table for many families across several lines of business.
  • In today’s world, where employees walk out in protest against a company’s business decision and succeed in changing it, as happened with Google and its AI project, it is interesting to see how differently American, and to some extent European, companies are judged compared to those based across Asia.

Augmented Reality’s Killer Use Case

Let me start off by saying I genuinely hate when people use the killer app terminology. I say this because the thing that drives a technology into the mainstream is rarely just one thing as much as people desire to believe it is only one thing. In mobile, for example, the killer app was APPS, not any one app.

I have discussions with company executives, investors, venture capitalists, and more on a regular basis, and many of my conversations are about the future. Augmented reality is a subject that comes up in nearly every discussion about where the world of personal computing is headed. Naturally, the question of what augmented reality’s killer app will be always comes up.

The more I’ve thought about this and attempted to connect the dots, the clearer it has become to me that augmented reality’s killer use case is not going to be that sexy. Pundits and so-called futurists often like to point to entertainment, gaming, and other sexier use cases as the thing that will drive augmented reality. My current conviction is those use cases are not where the mainstream will find true value with AR. Rather, I think the killer use for AR is utility. Good old-fashioned usefulness. Let’s look at a few examples.

I recently came across an app called Memrise. Memrise is a language learning app, and while most of the app is similar to other language learning apps, one feature stands out to me. Memrise uses Apple’s Core ML to do image detection of any object and then shows the object’s name in the language you selected. Here are two visuals to help with the illustration.

After spending some time with the app, I was quite surprised by how many objects the service recognized and how quickly it knew what the object was and gave me results on how to say it in the language of choice. World travelers will remember similar experiences with apps that translated words in other languages in real time using the camera on your smartphone. Text translation was quite a bit easier to pull off than image recognition plus translation, but nonetheless these are real-world, valuable use cases that any person can immediately understand and find useful.
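
For readers curious about the plumbing, here is a minimal sketch of how a point-and-name feature like this could be wired up on iOS with the Vision and Core ML frameworks. To be clear, I have no visibility into how Memrise actually builds it; the bundled MobileNetV2 classifier and the tiny spanishWords lookup table below are placeholders purely for illustration.

```swift
import UIKit
import CoreML
import Vision

// Hypothetical mini-dictionary; a real app would use a proper translation service.
let spanishWords = ["computer keyboard": "teclado", "coffee mug": "taza", "banana": "plátano"]

/// Classifies the dominant object in an image and returns (englishLabel, translation).
func labelObject(in image: CGImage, completion: @escaping (String?, String?) -> Void) {
    // MobileNetV2 here stands in for whatever classifier the app actually ships.
    guard let model = try? VNCoreMLModel(for: MobileNetV2().model) else {
        completion(nil, nil); return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Take the top classification and look up its translation, if we have one.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier, top?.identifier.flatMap { spanishWords[$0] })
    }
    request.imageCropAndScaleOption = .centerCrop // match the model's expected input framing

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

The point is less the specific model and more that the recognize-then-look-up loop fits in a few dozen lines, which is part of why I expect utility features like this to spread quickly.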

Another unexpected experience was with Apple’s new Measure app. If you are like me and you try to do most stuff around your house on your own, you know you can never have too many tape measures. My wife hates that I almost always buy a tape measure at Home Depot when a good deal on one shows up. Most of my closest friends are in construction and use a tape measure an uncountable number of times every day. I’ve never once worn a tape measure out, and they go through more in a year than I have owned in my lifetime. I showed this app to my brother-in-law, who is a carpenter foreman, and he doubted its accuracy. So he went around my yard measuring everything he could, including things that are often tough to measure with a tape because of their length. Even he was impressed with how accurate the app was. Also, while this app is no replacement for a tape measure, there are many times when you need to know roughly the length of something as you plan. See this example of a current project to build a deck and new fencing in the back part of my house.

I just needed a rough idea of the distance from my house to the pasture fence so I could have a rough estimate for planning another fence. This would have taken two people and several tape measurements, but I got it quickly with this app, and it was extremely close to accurate. I bring this use case up, and the Memrise one, to make a broader point. These experiences were quick and useful, and the action itself was not that different from holding my phone up to take a picture. The challenge I have with gaming use cases, or other more entertainment-focused use cases for AR, is they all require you to hold the device up for long periods of time. The utility-focused AR experiences I’m talking about are much briefer in their interaction model, while simultaneously being extremely useful and solving a pain point, or shining light on a pain point you didn’t know existed, like with the measuring app.
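
Apple has not detailed how the Measure app works under the hood, but the rough capability is available to any developer through ARKit, which is the basis for this hedged sketch: hit-test two screen taps against the surfaces ARKit has detected and take the straight-line distance between the resulting world positions (assuming an ARSCNView with a world-tracking session already running).

```swift
import ARKit
import simd

/// Rough distance (in meters) between the real-world points under two screen taps.
/// Returns nil if ARKit has not found a surface or feature point under either tap.
func distanceBetween(_ pointA: CGPoint, _ pointB: CGPoint, in sceneView: ARSCNView) -> Float? {
    // Prefer detected planes, fall back to raw feature points.
    let types: ARHitTestResult.ResultType = [.existingPlaneUsingExtent, .featurePoint]
    guard let hitA = sceneView.hitTest(pointA, types: types).first,
          let hitB = sceneView.hitTest(pointB, types: types).first else { return nil }

    // The last column of worldTransform holds the hit position in world coordinates.
    let a = hitA.worldTransform.columns.3
    let b = hitB.worldTransform.columns.3
    return simd_distance(simd_float3(a.x, a.y, a.z), simd_float3(b.x, b.y, b.z))
}
```

The accuracy my brother-in-law saw comes from ARKit’s world tracking rather than anything exotic, which is exactly why I expect this class of utility to keep improving for free as the underlying frameworks do.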

Now, I’ll tell you what I would personally find extremely useful: an app where I can point the camera at something and get a rough estimate of its weight. Like, for example, this hog.

Because I don’t know if you have ever tried to measure a hog to get its weight, but let me tell you, it is not enjoyable and it is borderline dangerous.

While I somewhat jest, I think in a number of these use cases it is important to recognize just how difficult being accurate can be. I also believe that these experiences become more valuable the richer the information becomes. For example, a killer use case just waiting to be solved is an app that lets you point your camera at your plate of food and get an accurate calorie count. Meaning it doesn’t just recognize the food on your plate but accurately measures the portions to give a precise calorie count. This seems inevitable and not that far off.

As I said at the start, most of these examples are not sexy, flashy, or the kinds of things that drive headlines. They are, however, things that can solve real consumer pain points in ways that were previously quite difficult or impossible to solve. With both Google and Apple going down this road, it seems that using the camera sensor as a way to get more rich information about the world is the next step in personal computing.

What is this plant? How do I solve this math problem? How much does this weigh? How tall is that tree? How many calories is this meal? What kind of food is this? What does that say? Who is this person? The list goes on, but you get the idea. These are all new ways to think about using the image sensor to answer a question. I can see a shift from a question in a consumer’s mind being answered by pulling up a browser and going to Google to pulling out your camera and pointing it at something. This, to me, feels transformative and sets the stage to make it much easier for consumers to embrace head-worn augmented reality solutions in the future.

Apple’s Next Frontier

One of the mantras of Apple detractors is that Apple can no longer innovate. They point to Apple evolving products instead of breaking any new ground, and some maintain that Apple has not innovated much since the iPhone was released in 2007. They say that the iPad is just an overgrown iPhone. While the Apple Watch does break some new ground as a new product, they don’t give Apple many props since it was not the first smartwatch and its roots lie in the health trackers that were in the market years before Apple introduced the Apple Watch Series 1.

The big problem with this non-innovation argument is that it ties innovation to brand new product introductions that define a new category of devices. It misses the concept of using breakthrough processors, camera features, and specialized software to do more than evolve a product, giving it new functions and capabilities. At this level, Apple has innovated on every product they have released since the original Mac. What I think the detractors mean is that Apple is not disrupting markets at the pace they would like, while overlooking Apple’s innovation in semiconductors, design, and software.

It should be noted here that most companies are lucky if they can bring one breakthrough product to the market that defines a new category of products or services. In Apple’s case, they have done this repeatedly: with the Mac, by bringing to the market a GUI, a mouse, and integrated software. The all-in-one, candy-colored iMacs introduced a new computer design that changed the way desktop computers were created. The iPod brought digital music players to the masses. The iPhone birthed the era of the smartphone, and the iPad brought mobile computing to new levels of portability. The Apple Watch created the smartwatch category, helped define what it is and can do, and is the #1 smartwatch on the market today. All of these were disruptive and innovative.

Central to Apple’s ability to develop category-defining products and innovate is their control of the hardware, software, and services used to create and deliver these products to the market. As I wrote in last week’s Tech.pinions ThinkTank column, Apple’s vertical strategy in which they even create their processors also plays a big role in their ability to disrupt and innovate.

As one who has covered Apple since 1981 and tried hard to understand what Apple’s real value proposition has been to the industry, I would have to say that introducing new user interfaces has been one of the things they do best. With the Mac, they gave us a graphical user interface and the mouse. With the iPod, they created a new mobile interface that worked on a portable music player. With the iPhone, they introduced touch UIs for navigation and, most recently, added voice as a UI to access and get information from all of their devices.

So what will likely be Apple’s new user interface frontier for disruption and innovation? I bet that it comes with AR and, eventually, glasses of some type that deliver a more personal approach to using AR apps. While voice will be a cornerstone of the UI, gestures will be the new “mouse” for glasses. With gestures, Apple will extend their role of advancing man-machine user interfaces, driving them into other products and mass-market acceptance.

What comes after gestures is interesting to think about. Given that one of Apple’s core competencies is bringing new user interfaces to the market, perhaps the next big frontier will be brain-driven interfaces. There is much work being done in this area in medical science, aimed at people with disabilities who can’t type, use a mouse, or interact with computers in other conventional ways. Maybe this UI comes out of Apple’s medical R&D and becomes part of their future advancements in user interfaces.

While Apple’s ability to disrupt has been at a slower pace than some would like, one would be hard pressed to suggest that they are not innovating in semiconductors, design, software, and user interfaces. How they deliver AR and glasses will probably be their next disruptive act, with gestures advancing their role in bringing new man-machine interfaces to the mass market. Moreover, should they ever master brain-to-machine user interfaces, the implications for those with medical handicaps, and even mainstream users, could be dramatic.

I Would not Think Less of Apple for Producing the Next Game of Thrones

Apple has been demonstrating over the years that it has its customers’ best interest at heart when it comes to privacy and security. Apple’s business model, which monetizes directly from hardware and services, makes such a prioritization much easier. Customers’ trust in the brand follows from that.

This week, the Wall Street Journal reported that Apple’s desire to keep its content family friendly had caused delays in the production of a show on Dr. Dre as well as of a drama based on a morning news show starring Jennifer Aniston and Reese Witherspoon.

The idea that Apple wants to stay true to its squeaky-clean image has been circulating as long as rumors of the TV service have been around. Many point to Apple’s history of care for privacy and security to explain why this would be the stand Apple will take.

Despite Apple’s investment in content production, we have yet to get any confirmation of the video service itself, let alone whether Apple will say no to any adult content.

The Content Offered Today

If we look at the content offered today across Apple services, adult content is easily found. Apple Music has songs that contain swear words and sexually suggestive lyrics. Books offers stories of romance, horror, and violence. The App Store includes violent games while staying clear of pornography. Of course, one could argue that the App Store does allow apps like Tumblr where pornographic material can easily be found.

Consumers navigate what is available across Apple’s services the same way they would on Spotify, in a bookstore, or in a video game store. Regulating content, while important, cannot be equated with assuring privacy and security. What Apple has in place for all its current content are ways to clearly label material intended only for a mature audience. From apps to books to explicit music, you can easily find settings that help you navigate content and avoid what you feel is not appropriate for you or your children. Music, for instance, has a simple setting that, when turned on, excludes all music with explicit language and offers alternative versions when available.

Apple Does not Need to Be Disney

When one starts thinking of content that limits violence, nudity, and bad language, one tends to think of Disney. While you could be tempted to look at Disney and believe they have been successful, the numbers show that over the past two years the Mouse House has been under quite a bit of pressure. A pressure that eventually pushed them into a bidding war with Comcast to acquire Fox’s entertainment assets.

While part of the pressure was coming from its parks’ finances rather than content, it is also true that Disney was starting to fight for its audience. Amazon, Netflix, and Hulu all offered strong competition with their productions. Apple will face that same competition once it launches its video service, which indicates not only that Apple will have to spend big money but also that it will have to make its service as broadly appealing as possible.

More importantly, however, I would argue that Apple does not need to be Disney when it comes to content, as its audience does not expect it to be. Looking at Apple’s base, there is no doubt families are a big part of it. Users have grown older since their first iPhone, and we know how big iPads are with kids, so Apple offering family friendly content will serve them very well. That said, Apple also appeals to millennials and the older segment of Gen Z, whose interests, according to a bunch of “most viewed shows” lists, range from “Everything Sucks” to “Modern Family” to “The Big Bang Theory” and “The End of the F*** World.” So, for a service to appeal to the vast majority of the base, Apple really has to address themes that matter to that audience as well as speak a language they understand.

Brand and Content

There is a difference, in my mind, between airing content and producing the content you are distributing. While both demand some accountability, the latter will be held to a higher standard.

I also believe the audience is quite able to decouple the brand from the show. In other words, I might object to some of the themes in Game of Thrones and decide not to watch it, but that does not make me stop watching all shows on HBO.

Interestingly, I feel that while content I might not like will not impact how I think about a brand, content I love will undoubtedly grow my affinity with the brand. This is particularly true for stories that cover topics dear to my heart or productions that support women, diversity, or other causes that might be important to me.

Of course, Apple has yet to outline their vision for content, but I would expect them to focus more on the key value propositions associated with their brand.

Focus on Quality and Management

When I think of Apple, I think of quality and a consumer-first approach. I would expect the same from their video service: quality that avoids cheap use of nudity, violence, and language. This does not mean that I expect Apple to focus only on family friendly content.

Once the quality of content is taken care of, I expect Apple to go beyond the standard classification of content by making it easy for users to assess whether the content is right for them or their family members and to manage access to such content accordingly.

If you think about it, this approach would be no different from what Apple is doing with “the big bad internet” where Apple is not censoring behavior, but they are helping me monitor and manage when websites are trying to track me or my information. The approach would also align quite nicely with Apple News where Apple is focusing on delivering quality content and helping users avoid “fake news.”

Time will tell if my hunch about the type of service Apple wants to deliver is correct, but I am quite confident that Apple is aware that to win in this game, reaching a large audience is key. To quote The Greatest Showman: “Nobody draws a crowd quite like a crowd.”

Rising Tech ASPs and the Holiday Season

It has been interesting to research consumer spending habits over the last few years in a series of quantitative studies we did. While people in the tech industry may assume that tech represents the largest part of a consumer’s holiday shopping budget, the reality is it often does not. Most consumers may have one or two major tech purchases planned, but that is generally about it. There are a few implications that tech’s rising ASPs may have on the holiday season.

Changing Holiday Spending
One of the many things rising ASPs in tech could impact is the number of products consumers buy during the holiday season. As ASPs rise, and Apple has a big role in this, consumers may buy only one major tech product instead of several. I’ve never seen a study on this, but in certain markets like North America and the UK, I would be willing to bet that Apple products absorb a healthy percentage of a consumer’s holiday tech budget. Which means that as Apple’s ASPs rise, it could impact other categories more heavily during the holiday season.

Interestingly, it isn’t just Apple. I’ve noticed a trend of rising ASPs across many tech products and consumer packaged goods. Perhaps companies have learned from Apple that when consumers find something they value, they are willing to pay more for better products and services.

While this may seem to go against conventional wisdom, I’ve long noted this consumer mindset as a function of mature markets. While commoditization, or commodity prices, plays an important role in driving a product or service into the mainstream, once mature consumer mindsets set in, consumers rarely keep looking for the cheapest thing around.

With mature products like smartphones, PCs, tablets to a degree (perhaps a different story here), TVs, etc., rising in ASPs, it means the leftover tech budget will have to go to smaller, less expensive gadgets or needs.

Commodity Tech
While there will be room for commoditized tech purchases, some products that are there now may not stay there. Smart speakers are a good example of commodity pricing helping drive a product into the mainstream. Last holiday, the vast majority of smart speakers sold were under $100, and their peak in weekly sales came when prices dropped below $40 as promotions kicked in. Whether smart speakers maintain commodity pricing is a question, but for now they fit the bill of a less expensive gadget bought with leftover tech budget.

In an era of rising ASPs, pricing products at near-commodity prices seems like a key strategy, but it is also one only a handful of companies can pull off. Amazon, with its flood of new Echo/Alexa products, seems positioned to do this, and Google’s smart home product strategy may be as well, to a degree. But this is not a battleground for tech companies with a singular product business model like Fitbit, GoPro, or others we have talked about. Which means upstart consumer hardware companies have a hard and long road in front of them and face a great deal more risk than reward from a business standpoint.

Overall Impact
The concern I’ve heard from retailers is that overall tech spending may be down if ASPs keep rising. The worry is that consumers buy less overall as they spend more on one or two things. While it is true consumers often buy a few big ticket items, they typically also buy many little things with their additional tech budget. Retailers understand this, as their strategy is to get consumers in the door with the big ticket items and then get them to buy a lot of little things.

Their worry is that consumers will spend only on the big ticket items (often the things retailers make the least margin on) and not buy the accessories, cables, or other smaller items where the retailers get better margins.

From a dollar standpoint, it may look as though overall consumer electronics spending is steady, but I’m not sure the trend of rising ASPs benefits the retailers as much in this equation.

With all the talk of the death of physical retail being imminent, some of these dynamics will keep adding new challenges for retailers, and it will be interesting to see how they respond.

I’m looking forward to seeing what happens this holiday season, and we will see if it plays out the way I think.

Microsoft and Partners Evolve the Modern Enterprise Desktop

The concept of a business PC and the digital environment it enabled used to be pretty easy to understand. You bought a PC from a vendor like Dell, HP, or Lenovo, installed a copy of Windows and a copy of Microsoft Office on it, connected it to the network, and you were basically done.

Today, however, there is an enormous array of different solutions to bring that same type of functionality to life for the modern business worker. From different types of buying, leasing, and hardware “as a service” business models for physically acquiring the PC, to several means of delivering the Windows desktop to a given device, to multiple choices of productivity suites and means of interacting with documents created with those tools, the modern enterprise desktop has become a surprisingly complex topic.

At the recent Microsoft Ignite event in Orlando, FL, the company provided yet more ways to deliver a digital workspace experience to today’s devices. First, Microsoft joined the growing movement of companies offering physical devices as a service with their Microsoft Managed Desktops solution. Like other device-as-a-service (DaaS) offerings from the big PC vendors, the Microsoft solution provides access to branded hardware products and cloud-based support and device management services for a monthly fee. Initially, the company is offering their own Surface Pros, but eventually they plan to make other PC brands available as well. In addition, Microsoft is providing some important (though likely expensive) differentiation by also bundling Microsoft 365, which includes Windows, Office 365, and integrated security features, as well as ongoing support for all that software, as part of the package.

For those who already have the PCs they need and prefer a virtual desktop offering, the company also launched Windows Virtual Desktop, which includes licenses for Windows 10 and Office 365 bundled together and hosted on the cloud by Microsoft’s Azure cloud computing platform. This is the company’s first VDI (virtual desktop infrastructure) initiative under their own brand, and it delivers a secure and managed Windows desktop to whatever PCs a company has connected. Unfortunately, the industry has fallen into the naming trap of also calling this DaaS, but in this context it means Desktop-as-a-Service, which is entirely different from (though conceptually related to) the other Device-as-a-Service. In both cases, companies are getting access to a Windows desktop for their employees, but through entirely different means. Needless to say, this naming issue is a serious challenge that can trip up IT professionals, vendors, and industry observers, who often end up talking in circles before they recognize the confusion.

Despite the naming issues, however, the fact that Microsoft is now offering their own version of a managed Windows desktop via the cloud is actually a big step forward for the company and for the whole concept of virtualized computing models. Many companies have large investments not only in lots of Microsoft software, but also in service and support relationships, and this new offering gives companies who want it a way to move some of the tedium of desktop management, updating, etc., to a partner like Microsoft.

In the process of developing this solution, the company was careful not to cut out partners, such as Citrix, who have had desktop-as-a-service offerings for some time. In fact, Citrix just announced a new partnership with Microsoft that extends from being a Microsoft-certified Cloud Services Provider through a variety of channel and service offerings. As part of it, they will offer a new version of Citrix Workspace that builds on Microsoft’s Windows Virtual Desktop. In addition, they will be developing a new DaaS solution that leverages Windows Virtual Desktop but also includes Citrix’s extended capabilities for multi-device support, app virtualization, app management, desktop management, and integration with other Citrix tools.

On the Office side of the desktop experience, Microsoft announced a number of important new initiatives that add more contextual intelligence and overall smarts to the applications. While it’s hard to imagine finding new capabilities that don’t already exist in the feature-laden MS Office suite, the all-inclusive search capabilities delivered via Microsoft Search, as well as the AI-driven Ideas functions, look to be attractive new add-ons for most users. Integrating enterprise-wide search across Office 365 lets users get access not only to files, but even to elements of files, such as charts and data, that can be incorporated into their own documents, which can be a huge help in many organizations. Ideas builds on the concepts found in the AI-powered Designer feature introduced in PowerPoint last year and makes suggestions not just on elegant designs, but on spelling, consistency, and other details that make using the applications feel smarter. I’ve been an active user of Designer, and I look forward to seeing how the Ideas instantiation of the concept extends the capabilities even further.

In addition, Microsoft has incorporated Ideas into Excel by adding what look to be some very clever new capabilities that can look for patterns and anomalies in data. For those of us who work with large sets of data, these capabilities can be both incredibly powerful and tremendous time savers.

Altogether, the combination of new ways to deliver Windows and Office experiences in easier (and smarter) ways adds up to a surprisingly useful set of additions to what many often perceive as a very slow-moving market. They also highlight how the combination of the cloud and AI is extending its influence well beyond more esoteric applications of the technology into our day-to-day computing experiences. From my perspective, that’s definitely an important step forward for us all.


Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.

Why Amazon Released So Many New Alexa Connected Devices

Last week, Amazon held an event in Seattle to announce that Alexa is now being used on over 70 new devices and to launch many new hardware products of their own that support Alexa.

Creative Strategies Principal Analyst and Tech.pinions columnist Carolina Milanesi did a great overview of the actual products Amazon announced at last week’s event for Tech.pinions subscribers, and it is clear that Amazon has great faith in the role Alexa can play in its future.

Amazon’s hardware strategy and its Alexa tie-in is at the heart of one of their most important long-term strategic goals and needs to be factored into these new device announcements.

One of the biggest problems the tech industry has dealt with for decades is that once a piece of hardware was sold, that was the end of the sale. Yes, companies could sell things like extra power supplies, monitors, additional memory, printers, etc., but in most cases these were one-time sales with no further revenue attached to them.

The PC makers did try to find a way to gain some continuous revenue by striking deals to preload software from third-party developers and take a small cut from those deals, but this created what was called “bloatware,” and most PC makers had to discontinue the practice. Even today, most PCs, printers, and other peripherals are sales that have no additional or long-term revenue stream attached to them.

Over the last ten years, things have shifted a bit, thanks to Apple and their creation of the iPod and eventually the iPhone, which was centered around a hardware, software, and services model. Over the last 10-12 years, Apple has helped define the concept of selling a piece of hardware and reaping continuous revenue through its various services. In this case, the iPhone especially serves as the receptacle for receiving and playing music, activating apps, watching videos, etc. While many of the apps are free, many carry monthly fees, like Apple Music, and others charge for the app itself. This brings Apple a level of aftermarket services revenue that delivers close to $10 billion each quarter.

What is impressive is that this is paid services revenue, and almost none of it comes from advertising. Today, Microsoft, Google, and Amazon have followed Apple’s lead and are tying more and more services and, in some cases, ad revenues to their business models to make sure that even after they sell you something hardware related, they can still earn money from that sale.

Much of what Amazon introduced last week reflects this approach to hardware and services, and by using Alexa, they are making it much easier to buy the products and services Amazon offers, too. However, one of the more interesting questions I got from a couple of media folks after these Amazon hardware announcements was whether Amazon is just throwing stuff at the wall to see what sticks.

While it may seem that way, Amazon is using another tried and true industry strategy, one employed by Intel, Microsoft, and others for decades. These companies have a core technology that they want to get into the hands of more people. In Intel’s case, it was their CPUs. In Microsoft’s case, it was their OS and specific software apps. For both, they needed to broaden the types of devices that could use their products and thus began creating what we in the industry call reference designs.

Although Amazon does believe they can sell all of the new products they have created and earn some revenue from them, their real goal is to push third-party hardware vendors to develop similar products and broaden the base of devices that support Alexa, in turn accelerating the number of products that feed into their overall services model.

An excellent example of this is the AmazonBasics Microwave. Carolina explains this new product in the following manner in her analysis:

“One of the devices that were leaked before the event, the AmazonBasics Microwave, ended up being a little different than anticipated. While Alexa’s smarts are integrated into the Microwave, her voice is not, and the Microwave must be connected to an Echo Device to use Alexa. This little difference changes the way I think about the role this new device has to play in Alexa’s ecosystem.

Rather than being a way to deliver Alexa to those users who might not have wanted to invest in an Echo, no matter how cheap it was, this Microwave represents a way to learn how well established user interfaces and device interactions can be changed by voice. Even though microwaves got smarter over time, the way we interact with them has not changed a great deal over the years. You open the door, put some food into the device and input some numbers to either selecting a meal setting or a cooking time. Now with Alexa, we can say what we are cooking, and Alexa will automatically set up the correct cooking time. Amazon also integrated the Dash functionality to replenish food.”

Amazon could sell many of these, but what would really benefit them is if many makers of microwaves also used either the Echo or even did a deal to integrate Alexa directly into the microwave itself. The same concept could apply to those making some form of set-top box alternative, who could license Amazon’s new Fire TV Recast product.

The same goes for Echo Auto or the Echo Wall Clock. If Amazon sells these branded products, that is good for them. However, if they get others to see these as reference designs and come to them to license Alexa technology for their own versions of the products introduced last week, it’s a more significant win for Amazon.

Amazon has evolved well beyond the online retailer that defined them over the last 20 years. They have joined the ranks of significant players that now offer hardware, software, and services. Also, given their scope and a strategy that uses reference designs to get many other hardware vendors to back them, their growth opportunities could be extraordinary.

Podcast: Amazon 2018 Product Launch Event

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the enormous variety of new product announcements from the recent Amazon press event and analyzing the implications on smart speakers, smart home, voice-based computing and their competitors.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Key Takeaways and Coolest Items from Last Week’s Mobile Conference

Last week, more than 20,000 people descended on Los Angeles for Mobile World Congress Americas (MWCA) – the second year of a partnership with the CTIA to stage a signature annual mobile event in the United States. While Apple stole some of the spotlight with its own, Cupertino-based event on September 12, this year’s MWCA was really a preview of some major ‘what’s next’ themes in mobile. Here are a few of my key takeaways, and some of the more interesting new products/services I saw at the show.

The über theme was that this is the “eve of 5G”. All major U.S. operators are expected to launch a version of commercial 5G services in the coming months. Verizon surprised many with the launch of its Fixed Wireless Access (FWA) service, branded Verizon 5G Home. Verizon is taking orders in its four initial launch cities. The service targets 300 Mbps, is priced at $50 per month, with all the equipment free, including an Apple TV or Google Chromecast and free YouTube TV for 3 months. This is exciting because it is the world’s first launch of a mmWave-based service. Verizon is taking on established broadband providers – not launching ‘wireless as a last resort’ Internet, which has been the province of most FWA offerings to date. Though all eyes will be on Verizon here — will the performance of mmWave be reliable, and can Verizon take meaningful share in broadband — bear in mind that this initial launch is little more than a large market trial. It’s based on non-standards-based equipment. Initial coverage will be modest, and the ‘white glove’ installation service won’t scale to a larger deployment. Still, this is the most anticipated new wireless service launch in some time.

Although 5G dominated the headlines at MWCA, I was equally excited about developments on the LTE roadmap, which will bring greater capacity and faster speeds – rivaling initial 5G, in fact. In addition to broader rollout of gigabit LTE, we’re also seeing some significant deployments of LAA by T-Mobile and AT&T. There was also substantive news about CBRS (the 3.5 GHz band), where we’re likely to see some commercial launch of GAA service in the coming months. Successfully combining LAA and CBRS can get an operator over the capacity hump as 5G gets built out, and can also achieve speeds of up to 2 Gbps.  Developments in the 3.5 GHz band will also hasten discussion of mid-band spectrum, and over time could bring some new players into the space.

There was also significant emphasis on edge computing. The narrative from operators, vendors, and even some of the sports/media/entertainment entities that are driving this need was remarkably consistent. Those with fiber and data center assets appear to be well positioned, given the processing going on at the baseband. Immersive entertainment and hyper local ad tech are driving the development of a services layer at the RAN.

I was also pleased to see a more positive outlook from two of the industry’s major network equipment vendors, Ericsson and Nokia. Both announced major 5G contracts, and have taken important steps toward addressing new opportunities in IoT and edge computing, while also rationalizing targeted business segments. They are also investing and hiring substantially in the United States. And this was their show to shine, given the relative absence of Chinese vendors from the North America equipment business. I also came away with the impression that Samsung (network side) and Intel will be much more significant players in 5G than they were in 4G.

One area of disappointment was IoT. Not a lot of detail was shared on network deployments or major customers. It seems like the NB-IoT ecosystem is still maturing, and there has been some shakeout among the LoRaWAN community. We’re still waiting for a larger number of significant deployments in IoT needed to drive greater investments on the network side. There’s a growing number of IoT companies that are treading water, waiting for this market to pop.

At the end of the day, even with the increased media/entertainment focus given the venue of the show, this is still largely an infrastructure event. Some of the most exciting stuff at this show, such as MIMO antennas, won’t exactly grab Wall Street Journal headlines. One standout was Pivotal Commware, which is doing some interesting work on software-defined antennas and beamforming technology. It also appears that small cell deployments are accelerating. I was also encouraged to see discussion at a couple of the high-profile regulatory sessions about smoothing the way for small cell approvals. This is critical, given that 200,000-300,000 new sites will be required for 5G (the approximate equivalent of today’s installed base).

Amidst all the focus on data, some of the coolest new products centered around good ol’ voice and messaging! An exciting company called Orion Labs showed a push-to-talk service on speed, adding a host of messaging functionality, a platform to develop OTT apps and messaging services, and a standalone device called Synch that I could see being quite useful in certain verticals. There was also excellent discussion about rich communication services (RCS), where an increasing number of brands are using the mobile platform for the next generation of customer engagement. It also appears that Google and Facebook are determined to play in a major way here.

On the downside, there was very little in the way of headline grabbing news at the show. There were few significant device-related announcements, perhaps due to Apple’s event and its outsized share of the North America market. But it’s surprising there was so little news about 5G devices — even dongles, pucks, or 5G ready laptops. And not much exciting on the IoT device front, either.

On a side note, I also have to say that downtown L.A. is less than desirable as a conference venue. Overpriced and not very good hotels, poor transport, and a convention center that is far from world class — even though it was built in the early 1990s. The contrast to Barcelona is quite remarkable. Downtown L.A. itself is improved, but still very much a work in progress. The number of homeless and street people was disheartening.

A final takeaway is that this show was really a set up for what is looking to be a landmark year for mobile in 2019: real 5G network and device launches, spectrum auctions, CBRS, NB-IoT, outcome of T-Mobile/Sprint, and likely other M&A on the heels of that. We’re going to know a lot more about 5G’s prospects a year from now, and will have a pretty good gauge on whether FWA can take meaningful share in broadband.

Alexa, What Did Amazon Announce This Week?

I was in Seattle earlier this week for an Amazon event that, over the space of an hour, covered over 70 new devices, customer features, and developer tools across Alexa, Echo, and Fire TV. Looking at the announcements overall, I thought Amazon was trying to do a few things:

  • Appeal to both early tech adopters and mainstream consumers.
    • The needs of early adopters, who bought into the Alexa ecosystem and the voice-first UI early, are focused more on growing the number of products they own, as well as increasing the number of engagement points they have with them through skills. These are also users who have become highly dependent on Alexa, which can now operate even if the internet is down, minimizing the negative feelings or frustration users felt when they could not operate their smart home without an internet connection.
    • For mainstream consumers, the needs are more centered on ease of use, a wide price range, and a clear value proposition. Frustration Free Setup for both Amazon-branded devices and third-party devices will be a crucial enabler of mainstream adoption. Amazon Smart Plug will be the ambassador of this new easy setup. As Echo devices move more into the mainstream market, it seems that Amazon’s design is becoming more elegant, more about devices blending in with the furniture than standing out because of the tech inside.
  • Broaden the ecosystem by:
    • Delivering best-in-class Alexa hardware while lowering the entry barrier for hardware vendors who want to integrate Alexa into home devices by providing the new Alexa Connect Kit
    • Offering a new developer language Alexa Presentation Language so that developers can deliver an experience that takes advantage of the screen of some devices without sidelining the voice-first UI
    • Providing a more natural way to interact with Alexa’s Skills
    • Introducing Echo Input to add Alexa functionality to ‘dumb’ speakers you might already own
    • Taking advantage of the recent Ring acquisition to already bring to market new devices and services deeply integrated with Alexa

When looking at the hardware that was introduced a few devices stood out for me:

The new Echo Show

As much as I wish Amazon had introduced this new design with the first Echo Show, I strongly believe that bringing a much bigger screen to market today is safer than it would have been with the first model. I say this because Amazon has learned a lot about meaningful interaction between visual and voice input, and I feel delivering the larger screen experience first might have compromised Alexa’s voice interactions.

It also feels like the new Echo Show offers a broader set of visual use cases that bring the screen to life, also thanks to the integrated hub. All delivered without making the Echo Show feel like a tablet on a stand.

Fire TV Recast

Although you might be tempted to see this device as riding the cord-cutting trend, it really is not about that at all. Recast is about reaching a segment of users who do not have cable but might already have a Fire TV. It gives these users the ability to record live TV and view it on their preferred device, from their TV to an Echo Show to their smartphone. The lack of a monthly subscription and the local storage make Recast quite competitive with other offerings in the market. I would not be surprised to see Recast prove more popular in Europe than in the US, simply because cable TV does not have the same level of penetration there.

AmazonBasics Microwave 

One of the devices that was leaked before the event, the AmazonBasics Microwave, ended up being a little different than anticipated. While Alexa’s smarts are integrated into the Microwave, her voice is not and the Microwave must be connected to an Echo Device in order to use Alexa. This little difference really changes the way I think about the role this new device has to play in Alexa’s ecosystem.

Rather than being a way to deliver Alexa to those users who might not have wanted to invest in an Echo, no matter how cheap it was, this Microwave represents a way to learn how well established user interfaces and device interactions can be changed by voice. Even though microwaves got smarter over time, the way we interact with them has not changed a great deal over the years. You open the door, put some food into the device and input some numbers to either select a meal setting or a cooking time. Now with Alexa, we can say what we are cooking and Alexa will automatically set up the correct cooking time. Amazon also integrated the Dash functionality to replenish food.

Echo Auto

The most frequent question I got about this device was: why would I use it? This device for your car dashboard gives you Alexa in your car thanks to Bluetooth connectivity between your car and your phone. Although Echo Auto can deliver turn-by-turn navigation through apps such as Waze, Google Maps, and Apple Maps, this is not really the reason you would use it. The “why bother” really rests on how deep into the Alexa ecosystem you are. Echo Auto is the link between your home and your car. It gives you a more location-aware Alexa that can operate smart home devices or trigger routines. For instance, you might say “Alexa, I am around the corner,” and Alexa might open your garage door, turn on the porch light, and remind you of anything you were supposed to pick up from the store. Echo Auto is not about having an assistant in the car; it is about extending your in-home Alexa experience into the car.

For the moment, Echo Auto is available in the US and by invitation only, indicating that Amazon is probably hoping to learn a lot from the initial rollout. Usually, users who are prepared to try devices and experiences this early in the rollout process have a higher degree of tolerance for bugs and kinks, something else I am sure Amazon is hoping to smooth out before general availability.

Echo Wall Clock

I think Amazon’s choice to give Alexa’s most-used skill an actual device is genius. Echo Wall Clock is the least intimidating way to introduce Alexa into the homes of users who are either skeptics or might think of themselves as “scared of technology.”

Alexa Hunches

This is the ability Alexa now has to suggest tasks she can perform. For instance, when you say “Alexa, goodnight” but you forgot to lock your front door, she might say, “The door is unlocked, would you like me to lock it for you?” It is interesting to me how good Amazon is at delivering value without becoming creepy. Alexa could have been programmed to just lock the door for me, but that is likely to scare people off by making them feel less in control. Think for a second how you would feel if Alexa just said, “The door was unlocked and I locked it for you.” The result is the same, but this latter example puts Alexa in control in a way that might raise concerns for the user.

There were other devices and features announced at the event, all underlining how relentless Amazon is in its drive to control our home. Putting the new devices aside, it is really the AI and ML capabilities Alexa is displaying more and more that stand out. This, and a reasonably pragmatic approach to rolling out new experiences, explains why Amazon remains ahead of the game in the connected home and digital assistant market. The biggest question for Amazon remains how easy it will be to roll out to international markets, making its lead an international one rather than a US one alone.

Making Sense of the GeForce RTX Launch

This week marks the release of the new series of GeForce RTX graphics cards that bring the NVIDIA Turing architecture to gamers around the globe. I spent some time a few weeks back going over the technological innovations that the Turing GPU offered and how it might change the direction of gaming, and that is worth summarizing again.

At its heart, Turing and GeForce RTX include upgrades to the core functional units of the GPU. Based on a very similar structure to previous generations, Turing will improve performance in traditional and current gaming titles with core tweaks, memory adjustments, and more. Expect something on the order of 1.5x or so. We’ll have more details on that later in September.

The biggest news is the inclusion of dedicated processing units for ray tracing and artificial intelligence. Much like the Volta GPUs that are being utilized in the data center for deep learning applications, Turing includes Tensor Cores that accelerate matrix math functions necessary for deep learning models. New RT cores, a first for NVIDIA in any market, are responsible for improving performance of traversing ray structures to allow real-time ray tracing an order of magnitude faster than current cards.

Reviews of the new GeForce RTX 2080 and GeForce RTX 2080 Ti hit yesterday, and the excitement about them is a bit more tepid than we might have expected after a two-year hiatus from flagship gaming card launches. I’d encourage you to check out the write-ups from PC Perspective, Gamers Nexus, and Digital Foundry.

The RTX 2080 is indeed in a tough spot, with performance matching that of a GTX 1080 Ti but with a higher price tag. NVIDIA leaned heavily into the benefit of Turing over Pascal with regard to HDR performance in games (using those data points in its own external comparison graphics), but the number of consumers that have or will adopt HDR displays in the next 12 months is low.

The RTX 2080 Ti is clearly the new leader in graphics and gaming performance but it comes with a rising price tag as well. At $1199 for the NVIDIA-built Founders Edition of the card (third party vendors will be selling their own designs still), the RTX 2080 Ti now sells for the same amount as the Titan Xp and $400 more than the GTX 1080 Ti launched at. The cost of high-end gaming is going up, that much is clear.

I do believe that the promise of RTX features like ray tracing and DLSS (deep learning super sampling) will drive a shift in the gaming market. Developers and creative designers have been asking for ray tracing for decades, and I have little doubt that they are eager to implement it. And AI is taking over anything and everything in the technology field, and gaming will be no different: DLSS is just the first instance of AI integration for games. It is likely we will find uses for AI in games for rendering, animation, non-player character interactions, and more.

But whether or not that “future of gaming” pans out in the next 12-18 months is a question I can’t really answer. NVIDIA is saying, and hoping, that it will, as it gives the GPU giant a huge uplift in performance on RTX-series cards and a competitive advantage over anything in the Radeon line from AMD. But even with a substantial “upcoming support” games list that includes popular titles like Battlefield V, Shadow of the Tomb Raider, and Final Fantasy XV, those of us on the outside looking in can’t be sure and are being asked to bet with our wallets. NVIDIA will need to do more, and push its partners to do more, to prove to us that the RTX 20-series will see a benefit from this new technology sooner rather than later.

When will AMD and Radeon step up to put pressure and add balance back into the market? Early 2019 may be our best bet but the roadmaps from the graphics division there have been sparse since the departure of Raja Koduri. We know AMD is planning to release a 7nm Vega derivative for the AI and enterprise compute markets later this year, but nothing has been solidified for the gaming segment just yet.

In truth, this launch is the result of years of investment in new graphics technologies from NVIDIA. Not just in new features and capabilities but in leadership performance. The GeForce line has been dominating the high end of the gaming market for at least a couple of generations, and the price changes you see here are possible due to that competitive landscape. NVIDIA CAN charge more because its cards are BETTER. How much better and how much that’s worth is a debate the community will have for a long time. Much as the consumer market feigns concern over rising ASPs on smartphones like the Apple iPhone and Samsung Galaxy yet continues to buy in record numbers, NVIDIA is betting that the same is true for flagship-level PC gaming enthusiasts.

Apple’s Ecosystem Advantage

As I step back and look at Apple’s fall launch event, the big story in my mind is the incredible strength of the Apple ecosystem. The word ecosystem gets used quite a bit and may often be overused, or even attributed to things that are not truly ecosystem components. Everything Apple has built, from hardware, software, services, retail, customer support, and more, has the Apple ecosystem as a central component. Apple’s management has often talked about and demonstrated the many ways Apple’s products work together seamlessly. It seems entirely logical that a company who makes many different products should have them work together, but often this is not the case. I’d argue that Apple not only has the strongest ecosystem but that their ecosystem compounds (gets better with more devices) better than any of their competition.

The Best Ecosystems Compound
Just so it is clear, I regularly try to live in all the competing ecosystems, like Windows and Android/Chrome. I do this to make sure I see the benefits and differences of each platform offering as a complete whole. In some cases, the effort of living in an ecosystem is not limited to specific hardware; in the case of Microsoft and Google, there are software-centric ecosystem points. However, that does not discount the role hardware plays in an ecosystem.

The reason both Microsoft and Google have started making more hardware to run their software and services is that they understand what Apple knew all along: to provide customers with the best experience with your software and services, you also need more control of the hardware which runs your platform. Partners who build hardware to run Windows and Android are important, but they often have a broader agenda that is not as aligned with Microsoft’s and Google’s. This point is made clear as I try products like Microsoft’s Surface and Google’s Pixel, which are the best implementations of Windows and Android as well as more seamlessly integrated with Microsoft’s and Google’s services than any other hardware.

Understanding that the best ecosystems compound means that the more you have and use, the better the whole experience gets. Moreover, while this is somewhat true in a Microsoft and Google world when you use partner hardware, it is truer the deeper you go into each company’s own hardware solutions.

Apple and the Most and Best Hardware Endpoints
Continuing on the observation that first-party hardware is crucial to an ecosystem advantage, what makes Apple’s strategy here so interesting is how many hardware endpoints they make that extend and deepen this ecosystem. And not only do they make a variety of hardware endpoints, in most cases they make the best, or among the best, hardware in every category where they compete.

So not only does Apple make the most hardware endpoints to experience and live in their ecosystem, but consumers also understand that the options they have in these categories from Apple are compelling choices. For example, if a consumer is interested in a smartwatch, the Apple Watch is the market share leader, the customer satisfaction leader, and the most talked-about smartwatch on the market. If a consumer is interested in a tablet, the iPad is the market share leader, the customer satisfaction leader, and again among the most talked-about in the category. The same is true of the iPhone and the Mac.

This is where Apple is unique, and their ecosystem truly stands out. For a consumer who is interested in all these categories, or even just a few, what other company provides such a robust hardware lineup that is also viewed as the best or among the best in all of the respective categories? From an ecosystem standpoint, Apple makes the strongest case as the ecosystem of choice for consumers who live in the modern multi-device era.

The breadth and depth of not just options but also quality, combined with an explicit focus on hardware, software, and services which all work better the more devices you have, is one of Apple’s strongest ecosystem advantages. Moreover, it is a luxury only they have at this point. Conceivably, only Google has the potential to match Apple’s hardware ecosystem, since they can conceptually compete in all the same hardware categories as Apple, but it is unclear if they will.

Consumers Understand the Ecosystem Story
The big question that has always existed is whether consumers understand this story. For much of the history of personal computers, consumers have only really had one or two personal computers. But as we enter the multi-device personal computing era, consumers are waking up to the idea that if they have one Apple personal computer and are interested in another category of computing product, getting another Apple hardware product means the two will work better together.

From not just our research, but many other research notes I’ve seen, consumers cite Apple’s products working together as a primary reason to consider another piece of hardware from Apple. This is a critical point because it means the ecosystem story is strengthening not just for Apple but in the minds of consumers as well.

All in all, this means Apple remains well positioned in the short and long term. Ecosystems not only compound, but their value also gets spread quickly via word of mouth, which also happens to be the most significant way people find out about new products and services.

Apple’s Neural Engine = Pocket Machine Learning Platform

I had a hunch going into Apple’s event that the stars of the show would be Apple’s silicon engineering team. The incredible amount of custom silicon engineering that went into making these products is worthy of a whole post at some point. For now, I want to focus on the component that may have the most significant impact on future software design: the neural engine.

Big Leap Year Over Year
It is helpful first to look at some specific year-over-year changes Apple made with the neural engine. In the A11 Bionic, the neural engine not only took up a much smaller part of the overall SoC block, it was also integrated with some other components. It was capable of 600 billion operations per second and was a dual-core design.

The neural engine in the A12 Bionic now has its own dedicated block in the SoC, has jumped from two cores to eight, and is now capable of 5 trillion operations per second. While these cores are designed with machine learning in mind, they also play an exciting role in helping to manage how the CPU and the GPU are used for machine learning functions. Apple referred to this as the smart compute system. Essentially, a machine learning task has three systems that work together to complete it: the neural engine, the CPU, and the GPU. Each plays a role, and the work is managed by the neural engine.

As impressive as the engineering is within the whole A12 Bionic, where it all comes together is in the software that allows developers to take advantage of all this horsepower. That is why Apple letting developers tap into this hardware through CoreML, to make apps we have never experienced before, is a big deal.
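To be clear about what “taking advantage of all this horsepower” looks like in practice: developers don’t program the neural engine directly. They hand Core ML a trained model and a compute-unit preference, and the system decides how to split the work across the CPU, GPU, and neural engine. A minimal sketch, with a placeholder model path:

```swift
import Foundation
import CoreML

// Sketch: developers don't schedule the Neural Engine directly. They hand Core ML a
// model plus a compute-unit preference, and the system splits the work across the
// CPU, GPU, and Neural Engine. The model path here is a placeholder.
func loadAndRun(input: MLFeatureProvider) throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    config.computeUnits = .all        // let Core ML use CPU + GPU + Neural Engine
    // Use .cpuAndGPU or .cpuOnly to restrict where inference runs.

    let modelURL = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc")  // placeholder
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    return try model.prediction(from: input)
}
```

That is the significance of the smart compute framing: the scheduling is Apple’s problem, not the developer’s, which is what lets app makers focus on the experience rather than the silicon.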

The Machine Learning Platform
Apple is getting dangerously close to bringing a great deal of science fiction into reality, and the work they are doing with machine learning is at the center of it. In particular, something geeks in the semiconductor industry like to call computer vision.

At the heart of a great deal of science fiction, and the subject of many analyses I have done, is the question of what happens when we can give computers eyes. This is front and center in the automotive industry, since cars need to be able to see, detect, and react accordingly to all kinds of objects in and around the road. Google Lens has shown off some interesting examples of this as well, where you point your phone at an object and the software recognizes it and gives you information. This is a new frontier of software development, and up to this point it has been relegated to highly controlled experiences.

What is exciting is to think about all the new apps developers can now create using the unprecedented power of the A12 Bionic in a smartphone and rich APIs to integrate machine learning into their apps.

If you have not seen it, I encourage you to watch this bit of Apple’s keynote, because a fantastic demonstration of this technology took place on stage. It was an app called HomeCourt that did real-time video analysis of a basketball player and analyzed everything from how many shots he made or missed, to where on the court he made and missed them as a percentage of his shots; it could even analyze his form down to the legs and wrist in order to look for patterns. It was an incredible demonstration with real-world value, yet it only scratches the surface of what developers can do in a new era of iPhone software with machine learning at the core.

Machine Learning and AI as the New Software Architecture
When it comes to this paradigm change in software, it is important to understand that machine learning and AI are not just features developers will add but a fundamentally new architecture that will touch every bit of modern-day software. Think of AI/ML being added to software in the same way multi-touch became the foundation of the modern smartphone UI. AI/ML is a new foundational architecture enabling a new era of modern software.

I can't overstate how important semiconductor innovation is to this effort. We have seen it in cloud computing, where many Fortune 500 companies are now deploying cloud-based machine learning software thanks to innovations from AMD and NVIDIA. Until now, however, client-side processing for machine learning has been well behind the capabilities of the cloud. Apple has brought a true machine learning powerhouse into the pockets of its customer base and opened it up to the largest and most creative developer community of any platform.

We are just scratching the surface of what is possible now and the next 5-7 years of software innovation may be more exciting than the last decade.

Competing With Apple’s Silicon Lead
If you have followed many of the posts I've written about the challenges facing the broader semiconductor industry, you know that competing with Apple's silicon team is becoming increasingly difficult. Not just because it is becoming harder for traditional semiconductor companies to spend the kind of R&D budget needed to meaningfully advance their designs, but also because most companies don't have the luxury Apple does of designing a chip that only needs to satisfy the needs of its own products. Apple's semiconductor engineering team gets to develop, tune, and innovate specialized chips that exist solely to bring new experiences to iPhone customers. That is exceptionally difficult to compete with.

However, there is one area where companies can compete: cloud software. Good cloud computing companies, like Google, can conceivably keep some pace with Apple by moving more of their processing off the device and into the cloud. No company will be able to keep up with Apple in client/device-side computing, but they can close the gap if they utilize the monster computing power of the cloud. This, to me, is one of the more interesting battles of the next decade: Apple's client-side computing prowess versus the cloud computing prowess of those looking to compete.

Apple Watch Series 4: A Heart Patient’s Perspective

When an ordinary healthy consumer looks at Apple's new Watch Series 4, with its updated health-related sensors and its new ability to take a real-time electrocardiogram, they most likely see a newer and better model of the watch, but the heart health features do not feel relevant to them. One of the comments I heard from some of the younger journalists at Apple's launch event was, "this new watch is for older people."

First, let me address the relevance of this watch to all users, not just older ones who might have heart issues. The significant new features in this Apple Watch are specialized sensors that can detect early signs of AFib, which can cause strokes and even heart attacks. Then there is the added ECG capability that monitors heart rhythms and can give your doctor a record of your heart rate to look for any irregularities.
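For developers, the watch's heart data is exposed through HealthKit. Below is a minimal Swift sketch, assuming the app has already requested and received HealthKit read authorization; it fetches the most recent heart-rate samples the watch has recorded and sticks to ordinary heart-rate readings rather than the new ECG data, which Apple currently surfaces through its own Health app.

```swift
import Foundation
import HealthKit

// Minimal sketch, assuming HealthKit read access has already been granted:
// fetch the ten most recent heart-rate samples the watch has recorded.
let healthStore = HKHealthStore()

func fetchRecentHeartRates(completion: @escaping ([Double]) -> Void) {
    guard let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate) else {
        completion([])
        return
    }
    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
    let query = HKSampleQuery(sampleType: heartRateType,
                              predicate: nil,
                              limit: 10,
                              sortDescriptors: [newestFirst]) { _, samples, _ in
        let bpm = HKUnit.count().unitDivided(by: .minute())  // beats per minute
        let readings = (samples as? [HKQuantitySample])?
            .map { $0.quantity.doubleValue(for: bpm) } ?? []
        completion(readings)
    }
    healthStore.execute(query)
}
```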

When I was younger, I was very athletic and felt invincible. Even in my late 20s, it seemed I could burn the candle at both ends and stay very active. In my early 30s, I was running five miles three times a week. However, during a physical at age 33, my doctor discovered I had high blood pressure. I was in peak health, yet I had high blood pressure. Some of it was hereditary, and part of it was related to my eating habits. While I could correct the eating issue, it took medication to deal with the genetic problem. So at 33, I became a lifelong heart patient.

Over the years I have seen quite a few friends struggle with heart-related issues, even at an early age. One of the top leaders in tech in the 1980s had a stroke in his late 20s, and it has impacted his life ever since. I have even had a friend who died of a heart attack at age 23. There are so many factors that go into one's potential for developing heart disease at any age that starting to monitor this particular health issue even in the younger stages of one's life has merit. That is why I dismiss the idea that the Apple Watch Series 4, with its heart health monitoring features, is just for old people. I believe it has serious heart health tracking features that should be relevant to anyone over 20 years of age.

From my own perspective as a heart patient, this watch is even more relevant to me than past models. I have two strikes against me: I have been a type 2 diabetic since 1995, and in 2012 I had a triple bypass. Even without the Apple Watch, I have been monitoring my health in great detail over the years and use a blood pressure cuff daily. I also wear Dexcom's G6 continuous glucose monitor 24/7, which allows me to see my blood sugar readings any time I want to check them. (I initially used a separate handheld device to view these readings, but since Dexcom made it work with the Apple Watch, I now glance at my watch to see my current blood sugar measurement.)

If a person has been diagnosed with heart disease, this watch could be critical to them. However, even for those without a diagnosis, the possibility of AFib at any age may make it worth the expense of having an Apple Watch keep tabs on their heart health.

Apple has received thousands of letters from people whose Apple Watch alerted them to health issues they then raised with their doctors for immediate treatment. Apple has also received hundreds of letters from people who said the Apple Watch saved their lives.

So, what other health features could Apple bring to market in the future? There have been many rumors that Apple could be creating its own continuous blood glucose monitoring system. Moreover, it is not too far-fetched to believe that Apple could add another unique sensor that works with an inflatable band to read blood pressure.

I believe that the Apple Watch Series 4, with its advanced heart health monitoring features, will prove to be even more important to millions of people over the age of 20. This new Watch delivers much higher value than past models, and I suspect it could become Apple's best-selling model to date, as these new health monitoring features will appeal to a broader audience around the world.

AI Application Usage Evolving Rapidly

Given the torrid pace of developments in the world of artificial intelligence, or AI, it’s probably not surprising to hear that applications of the technology in the business world are evolving quickly as well. What may catch people off guard, however, is that much of the early work in real-world use is happening in more mundane, back-office style applications like data security and network security, instead of the flashier options like voice UI, as many might expect.

As part of the AI in the Enterprise study recently fielded by TECHnalysis Research (see a previous column called "Survey: Real World AI Deployments Still Limited" for more background and additional information on the survey), over 500 US-based businesses actively involved in developing, piloting, or running AI applications in full production were asked about the kinds of applications they use in their organizations. Respondents were asked to pick from a list of 15 application types, ranging from image recognition to spam filtering to IoT analytics and more, and to indicate the maturity level of each of their application efforts, from development to pilot to full production.
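To make the structure of that data concrete, here is a small, purely illustrative Swift sketch of how responses like these could be tallied by application type and maturity stage. The categories and sample entries below are made up for illustration and are not the study's actual categories or results.

```swift
// Purely illustrative: one way to tally survey-style responses by application
// type and maturity stage. The categories and sample data below are made up
// and are not TECHnalysis Research's actual data.
enum Stage: String {
    case development, pilot, fullProduction
}

struct Response {
    let application: String   // e.g., "Data Security", "Spam Filtering"
    let stage: Stage
}

// Count how many respondents reported each application at each stage.
func tally(_ responses: [Response]) -> [String: [Stage: Int]] {
    var counts: [String: [Stage: Int]] = [:]
    for response in responses {
        counts[response.application, default: [:]][response.stage, default: 0] += 1
    }
    return counts
}

// Example usage with made-up entries:
let sample = [
    Response(application: "Data Security", stage: .fullProduction),
    Response(application: "Voice UI/NLP", stage: .pilot),
    Response(application: "Robotics", stage: .development)
]
let tallies = tally(sample)
```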

As Figure 1 shows below, the top two choices amongst the respondent group were Data Security and Network Security, with roughly 70% of all respondents saying they had some kind of effort in these areas.


Fig. 1

While these are clearly critical tasks for most every organization, it’s interesting to see them at the top of the list, because they’re not the type of applications that are typically seen—or discussed—as being cutting edge AI applications. What the survey data clearly shows, however, is that these core company infrastructure applications are the ones that are first benefitting from AI. Though they may not be as sexy as computer vision and image recognition, ensuring that an organization’s data and its networks are secure from attacks are great ways to leverage the practical intelligence that machine learning and AI can bring to organizations.

As important as the top-level rankings of these applications may be, when you look at the application usage data by maturity level of the implementation, even more dramatic trends appear. Figure 2 lists the top AI applications in full production and, as you can see, virtually all of the highest-ranking applications can be classified more as back-office or infrastructure type programs.


Fig. 2

Spam Filtering applications made it to number two on this list, and Device Security rose to number four overall. Again, both of these applications can leverage AI-based learning to provide a strong benefit to organizations that deploy them, but neither of them has the association with human intelligence-type capabilities that so many people expect (and fear) from AI.

When you look at the top applications in pilots, a dramatically different group rises to the top, as you can see in Figure 3. Here is where we start to see more of the AI applications that many people might have expected to appear higher on the overall list, such as Business Intelligence, Voice UI/Natural Language Processing, and Image Recognition. What the data shows, however, is that many of these more "sci-fi"-like applications are simply in much earlier stages of development.


Fig. 3

Following the same kind of trends, the top AI applications still in development, illustrated in Figure 4 below, are focused on an even more distant view of the future, with Robotics at the top of the list followed by Manufacturing Efficiency/Predictive Maintenance and then Call Center/Chatbots. Companies are clearly driving efforts to get these kinds of applications going in their organizations, but the real-world implementations are still a bit further down the road.

Fig. 4

Taking a step back from all the data, it’s interesting to note that there are unique groups of applications at the various maturity levels. Many of those that are high-ranking at one level are much lower-ranked at the next maturity level, suggesting very distinct phases of development across different types of AI applications. It’s particularly interesting to see that the realities of AI usage in the enterprise are fairly different than what much of the AI press coverage has suggested.

Understanding what companies are actually doing in this exciting area can help set more realistic expectations for how (and when) various aspects of AI will start to make their impact in the business world.

(Look for more data from this study and a link to summary results in future columns.)

Bringing Back Manufacturing Jobs

If this country wants to bring back high-tech manufacturing jobs, it needs to do a lot more than tax iPhones made in China. President Trump's tweet to that effect is far from his worst, but it's about as ignorant as many we've seen. It's also an opinion that's been expressed by others, often with good intentions of bringing manufacturing jobs back to this country. And like a broken clock that's right twice a day, the sentiment is not necessarily wrong. We have lost many high-paying manufacturing jobs, and we should look at what it would take to bring them back. Too many of our citizens are underemployed in service jobs and struggling to make a minimum wage. Underemployment is a serious issue.

Having designed and built scores of consumer tech products in this country from the seventies into the nineties, I've seen and participated in the movement of more and more products to Asia, and continue to do so. I was instrumental in shifting the building of products for Polaroid and Apple from this country to Asia, specifically to Japan, Taiwan, and China.

Our politicians seem to show about as much understanding of this issue as they do of other technologies. They simplify the cause and solution to a few tweets. If they really do want to bring back manufacturing jobs, tariffs are not the solution.

What is the answer? Here’s what I’d tell the politicians to do:

Understand why products are being made in Asia. Spend some time learning why China is such an attractive place to design and build them. Read this classic and timeless article by James Fallows from The Atlantic Monthly, China Makes, the World Takes. You’ll learn that U.S. companies build products there because of talent, speed, infrastructure, and cost. While cost is an important consideration, it’s no longer the primary reason.

China did not become the manufacturer to the world without immense commitment and foresight. Both national and local governments provided incentives and billions of dollars in investments to create the infrastructure that enabled it to happen. They built industrial parks, highways, bullet trains, libraries, high-speed networks, colleges, hospitals, and airports. They cleared the trees, tilled the fields, planted the seeds, and nurtured the growth that allowed thousands of factories to blossom, skills to be developed, and millions of jobs to be created.

During the decades this was occurring, our government stood by and did nothing. We failed, and continue to fail, to develop our infrastructure, encourage new development centers, and invest in new technologies. Just one small example: the U.S. ranks 28th in the world in mobile internet speeds, behind Greece. When there is an initiative, it's usually boneheaded, such as bringing back coal mines.

And we continue to do nothing. While being the manufacturing center for the electronics industry may have passed us by, we could still do with green technologies what China has done with computers and cell phones. While our governing party denies climate change and even questions science, the Chinese government is fast becoming the world's center of solar technology and electric cars. By mandating the move to clean energy to address the environment, China has incentivized the building of factories and the creation of manufacturing jobs to produce electric cars, solar panels, windmills, batteries, and products yet to be invented. Right before our eyes, they are repeating what they did with electronics manufacturing and creating new centers of manufacturing for the world, this time for green technologies. As they did decades ago, they are able to see the future and are investing to dominate it.

So I'd tell our government: if it is serious about bringing manufacturing jobs back to this country, it's not going to happen with tariffs or coal mines. It could happen by looking ahead and seeing where the jobs will be created. Stop denying science; embrace it, support it, and invest in the future. That's the most effective way to bring manufacturing jobs back to the United States.