Podcast: Apple Product Reviews, Google-HTC, Nest, Amazon Smart Glasses

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the reviews of Apple’s newly launched products, analyzing Google’s announced acquisition of people from HTC, chatting about the Nest security product announcements, and debating the opportunity for Amazon-branded smart glasses.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

What is the Future of Upgrades?

One of the most appealing aspects of many tech-based products is their ability to be improved after they’ve been purchased. Whether it’s adding new features, making existing functions work better, or even just fixing the inevitable bugs or other glitches that often occur in today’s advanced digital devices, the idea of upgrades is generally very appealing.

With some tech-based products, you can add new hardware—such as plugging a new graphics card into a desktop PC—to update a device. Most upgrades, however, are software-based. Given the software-centric nature of everything from modern cars to smart speakers to, of course, smartphones and other common computing devices, this is by far the most common type of enhancement that our digital gadgets receive.

The range of software upgrades made for devices varies tremendously—from very subtle tweaks that are essentially invisible to most users, to dramatic feature enhancements that enable capabilities that weren’t there before the upgrade. In most cases, however, you don’t see entirely new hardware functions being made available through software upgrades. I’m starting to wonder, however, if that concept is going to change.

The event that triggered my thought process was Tesla’s recent decision to temporarily enhance the battery capacity, and therefore the driving range, of their vehicles for owners in Florida who were trying to escape the impact of Hurricane Irma. Now, Tesla has offered software-based hardware upgrades—not only to increase driving range but to turn on their autonomous driving features—for several years.

Nevertheless, it’s not widely known that several differently priced models of Tesla’s cars are identical from a hardware perspective and differ only in the software loaded into the car. Want the S75 or the S60? There’s an $8,500 price difference and a 41-mile range difference between the two, but the only actual change is a software enablement of battery capacity that exists in both models. Similarly, the company’s AutoPilot feature is $2,500 on a new car, but can be enabled via an over-the-air software update on most other Tesla cars for $3,000 after the purchase.
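
To make the mechanics concrete, here is a minimal sketch of how software-gated hardware capacity might work, written in Python. Everything in it—names, numbers, and logic alike—is purely illustrative; Tesla’s actual firmware is obviously not public.

```python
# Hypothetical sketch of software-gated battery capacity, loosely modeled on
# the S60/S75 situation described above. All names and numbers are invented.

PHYSICAL_CAPACITY_KWH = 75.0  # the pack both models actually ship with

ENTITLEMENTS = {
    "S60": 60.0,  # software-limited usable capacity
    "S75": 75.0,  # full pack unlocked
}

def usable_capacity(model: str, emergency_unlock: bool = False) -> float:
    """Return the capacity the car is allowed to use.

    An over-the-air upgrade (or a temporary emergency unlock, like the one
    granted to Florida owners ahead of Hurricane Irma) just changes this
    lookup; the hardware underneath never changes.
    """
    if emergency_unlock:
        return PHYSICAL_CAPACITY_KWH
    return min(ENTITLEMENTS[model], PHYSICAL_CAPACITY_KWH)

print(usable_capacity("S60"))                         # 60.0
print(usable_capacity("S60", emergency_unlock=True))  # 75.0
```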

In the case of the Florida customers, Tesla was clearly trying to do a good thing (though I’m sure many were frustrated that the feature was remotely taken away almost as quickly as it had been remotely enabled), but the practice of software-based hardware upgrades certainly raises some questions. On the one hand, it’s arguably nice to have the ability to “add” these hardware features after the fact (even with the post-purchase $500 fee above what it would have cost “built-in” to a new car), but there is something that doesn’t seem right about intentionally disabling capabilities that are already there.

Clearly, Tesla’s policies haven’t exactly held back enthusiasm for many of their cars, but I do wonder if we’re going to start seeing other companies take a similar approach on less expensive devices as a new way to drive profits.

In the semiconductor industry, the process of “binning”—in which chips of the same design are separated into different “bins” based on their performance and thermal characteristics, and then marketed as having different minimum performance ratings—has been going on for decades. In the case of chips, however, there isn’t a way to upgrade them—except perhaps with overclocking, where you try to run a chip faster than its rated frequency, with no guarantee it will work. The nature of the semiconductor manufacturing process simply creates these different thermal and frequency ranges, and vendors have intelligently figured out a way to create different models based on the variations that occur.
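
For illustration, here is a toy version of the binning logic just described. The thresholds, bin names, and test data are all invented for the example; real binning pipelines are far more involved.

```python
# Illustrative sketch of "binning": identical dice are tested after
# manufacturing, then sorted into differently marketed bins based on the
# measured results. Thresholds and bin names are made up for the example.

def bin_chip(max_stable_ghz: float, watts_at_max: float) -> str:
    if max_stable_ghz >= 4.0 and watts_at_max <= 95:
        return "flagship"    # sold as the top-speed, top-price SKU
    if max_stable_ghz >= 3.6:
        return "mainstream"  # same die, rated (and priced) lower
    if max_stable_ghz >= 3.2:
        return "value"
    return "reject"          # or salvaged with parts of the die disabled

# Every chip below came off the same design and the same process;
# manufacturing variation alone produces the different bins.
for ghz, watts in [(4.2, 91), (3.7, 88), (3.3, 102), (2.9, 110)]:
    print(f"{ghz} GHz / {watts} W -> {bin_chip(ghz, watts)}")
```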

In other product categories, however, I wouldn’t be surprised if we start to see more of these software-based hardware upgrades. The benefits of building one hardware platform and then differentiating solely based on software can make economic sense for products that are made in very large quantities. The ability to source identical parts and develop manufacturing processes around a single design can translate into savings for some vendors, even if the component costs are a bit higher than they might otherwise be with a variety of different configurations or designs.

The truth is, it is notoriously challenging for tech hardware businesses to make much money. With few exceptions, profit margins for tech hardware are in the low single digits, and many companies actually lose money on hardware sales, hoping to make it up via accessories or other services. As a result, there’s more willingness to experiment with business models, particularly as we see the lifespans for different generations of products continue to shrink.

Ironically, though, after years of charging for software upgrades, most companies have started to offer their software upgrades for free. As a result, I think there’s more reluctance among consumers and other end users to pay for traditional software-only upgrades. In the case of these software-enabled hardware upgrades, however, we could start to see the pendulum swing back the other way, as virtually all of these upgrades have a price associated with them. In the case of Tesla cars, in fact, it’s a very large one. Some have argued that this is because Tesla sees itself as more of a software company than a hardware one, but I think that’s a difficult concept for many to accept. Plus, for the many traditional hardware companies that may want to try this model, the positioning could be even more difficult.

Despite these concerns, I have a feeling that the software-based hardware upgrade is an approach we’re going to see a number of companies try variations on for several years to come. There’s no question that it will continue to come with a reasonable share of controversies (and risks—if the software upgrades become publicly available via frustrated hackers), but I think it’s something we’re going to have to get used to—like it or not.

Apple’s A11 Bionic: The Core of Apple’s Competitive Advantage

I’m not the only one, but there aren’t many folks out there who have been pounding the Apple silicon strategy drum. There are many strategically fascinating elements to these efforts that many people, companies, and Apple competitors take for granted. I’ve argued before that Apple’s silicon efforts are one of the core legs of the stool that help them differentiate and separate their products from the herd. If any component supplier in semiconductors or sensors cannot meet their needs or deliver on their vision, they simply design what they need themselves. While I want to dig into the A11 Bionic processor itself and the key parts of the new architecture that are relevant, let’s first look at the list of components Apple now designs themselves.

  • CPU
  • GPU
  • Display Controller
  • Image Processor
  • Wi-Fi and Bluetooth modules (in Apple Watch Series 3, but expect them to come to other products as well)
  • Secure Enclave co-processor
  • Video encoder co-processor
  • Performance controller
  • Neural Engine co-processor

I’m sure there are a few I’m leaving out which they didn’t mention, but the list of Apple-designed silicon grows with nearly every product generation. I do not expect Apple’s silicon team to slow down.

Even beyond their proprietary silicon efforts, consider the other components they build into the iPhone that they customize with their component partners. The OLED display is a custom panel that Apple helped design and Samsung manufactures. The glass on the front and the back is a custom design done in partnership with Corning. The lithium-ion battery uses a proprietary recipe Apple created in conjunction with their battery supplier. The camera lens technology they get from Sony, and now LG for the TrueDepth system, is custom designed. When it comes to manufacturing, Apple has some proprietary manufacturing processes they created with Foxconn that are unique and exclusive to the iPhone. You can see where I’m going with this. Apple’s level of vertical integration goes down to the most important details of their products. Never have we seen anything like this in consumer electronics.

It is this level of verticalization and attention to detail that got them to be where they are.

A11 Bionic – Fastest and Smartest Chip in the World
Apple claimed that the A11 Bionic is the fastest and smartest chip in the world. We will talk about what it means to be the fastest and then the smartest, but first I want to highlight how Apple is starting to discuss the A11 as a brand. You may recall that from the beginning of their silicon efforts, Apple referred to their main iPhone and iPad processor simply as the A(x) processor. Only recently did they start referring to the A10 as Fusion. Apple is taking a play from the playbook of Intel, AMD, and Qualcomm by assigning a brand to their main chipset, with the name changing when the chip’s underlying architecture changes. In doing this, Apple is telling us the underlying design architecture of the Bionic is new and different from the Fusion. Some may disagree with the branding choice, but honestly, Apple is subtly telling the world they consider their efforts in silicon design on par (I’m sure they feel they are better) with those of Intel, AMD, and Qualcomm, whose main business is to design the world’s best chips for computers and sell them to the world. In short, Apple is saying: our chipset designs are as good as these guys’, and they are exclusive to Apple products.

Apple’s silicon efforts are unique. Apple can custom-tune the chipset architecture to their needs for iOS in ways no other company can. This is why it is somewhat unfair to compare Apple’s chip designs to those of Qualcomm, Intel, or AMD. Those companies have to design chipsets in completely different ways than Apple does because they serve a larger market and a broad range of customers. They don’t have the luxury of focusing a design on just one device or platform. Because they sell to third parties, they also have to pass much more stringent regulation and certification processes that Apple does not. So comparing Apple’s chip performance to Intel’s or Qualcomm’s is one of those unfair yet somewhat relevant comparisons.

It is this custom tuning of chipset designs to iOS that I find incredibly compelling for Apple. Think about one point they made regarding the Bionic architecture. The chip now has six cores: two performance cores to do the heavy lifting, and four efficiency cores for smaller, more lightweight tasks. The A10 Fusion had two performance cores and two efficiency cores. The two performance cores on the A11 Bionic are 25% faster than the A10’s, and the four efficiency cores are 70% faster. Then we get to this nugget: the second-generation, Apple-designed performance controller (the controller they designed to determine how best to utilize all these cores together, intelligently, for efficiency and performance) runs multithreaded workloads 70% faster. While the overall core speed bumps sound great, it is this 70% gain in multithreaded performance where you will visibly see a difference in how the OS and apps perform on iPhone 8/Plus and X. This feature alone will cause developers to rejoice because it increases what they can do with their software. One of my favorite lines those of us in semiconductor circles like to use is: “The one group you never need to convince to take more performance is software developers.”
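
As a rough illustration of what a performance controller does, here is a toy scheduler that routes work across a two-performance-core plus four-efficiency-core layout. Apple’s actual controller is proprietary silicon; this sketch only shows the routing concept, and every name and threshold in it is made up.

```python
# Toy illustration of heterogeneous scheduling on a 2 + 4 core design.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cost: float  # abstract work units; heavy tasks have a high cost

HEAVY = 5.0  # invented threshold separating heavy lifting from light work

def schedule(tasks: list[Task]) -> dict[str, list[str]]:
    """Route heavy tasks to the performance cores and light ones to the
    efficiency cores, so all six cores can chew on a multithreaded
    workload at the same time."""
    plan = {"performance": [], "efficiency": []}
    for t in sorted(tasks, key=lambda t: -t.cost):
        plan["performance" if t.cost >= HEAVY else "efficiency"].append(t.name)
    return plan

work = [Task("video encode", 9), Task("UI animation", 2),
        Task("photo filter", 7), Task("mail sync", 1), Task("audio", 1.5)]
print(schedule(work))
# {'performance': ['video encode', 'photo filter'],
#  'efficiency': ['UI animation', 'audio', 'mail sync']}
```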

We also witnessed the debut of Apple’s custom, in-house GPU design. I’m still unclear how much Imagination IP is used, as I can’t imagine (no pun intended) it is gone entirely; perhaps it is just diminished. Regardless, Phil Schiller made a significant comment about this GPU design. He said it was specifically designed to accelerate 3D games, especially those built with the new Metal 2 framework. This gives us all the insight we need into Apple’s proprietary silicon efforts. They are designing the chips to perform EVEN BETTER when you use their proprietary developer frameworks like Metal, Swift, and now ARKit, Core ML, etc. This not only deepens their engagement with developers and secures developers into their long-term future but also makes the apps and software that run on iOS that much more powerful. This tightly integrated strategy improves developers’ chances of making more money with their software, which keeps them chomping at the bit to create new software for iOS and not other platforms.

In light of the value proposition I just described, the addition of the A11 Bionic’s Neural Engine makes complete and total sense. Apple has designed a bit of silicon specifically to run their machine learning tasks for new features like face tracking and Animoji, as well as for third parties, as they demonstrated with Snapchat. Giving developers access to the A11 Bionic’s Neural Engine will again expand the possibilities for software developers, creating greater opportunity and software innovation.

Everything Apple designs and customizes from a component standpoint is purpose-built for a better experience with their hardware. This makes competing with them very difficult and makes the experiences with their products noticeably better to the naked eye and the average consumer. This is why their increasing control of the component designs in nearly every aspect of the iPhone is so significant to their competitive advantage, and will be for quite some time.

It’s Time for Modern Digital Identities

It used to be so simple.

Essentially, you could verify your identity by providing some kind of unique piece of information that—in theory, at least—only you or other trusted parties would know. Like, for instance, your social security number.

Of course, those days are now gone, and last week’s monumental hack of credit reporting firm Equifax put a thundering exclamation point onto the end of that era. Throw in all the other high-profile hacks into companies like Home Depot, Target, etc. and it’s not too far a stretch to say that not only the social security number, but a great deal of other identifying information on nearly anyone in the US is now readily available. (In fact, paradoxically, the value of that once very important information has likely dropped dramatically.)

Identity verification without being physically in front of someone remains an incredibly important part of how we interact with the world around us, however, so what do we do? The problem is that we don’t really have a clear, universal alternative moving forward.

Yes, there are numerous efforts designed to move away from the more traditional “analog” methods of identity to digital ones, but none of them work across all the environments or interactions in which we find ourselves engaging. Ironically, the notion of moving to very basic forms of digital identity—usernames and passwords—has actually exacerbated today’s identity problem, and by a huge amount.

Today’s digital identities are essentially a horrendous conglomeration of good intentions gone wrong, because none of them is truly complete. Part of the reason is that, while moving towards a single digital identity—such as a government-sponsored system—offers some clear benefits, it also opens up potential risks as a single, critical point of attack. Lose that one identity, and you could potentially lose everything.

Important steps forward are being taken, however. First, we’ve seen tremendous growth in the use of multi-factor authentication, where you need to provide at least two forms of digital ID to verify your identity. The problem is that not all methods of providing a second or third factor, or “form” of digital identity, are equally strong, and several have been discovered to be much weaker than initially thought. Texting temporary log-in codes via SMS, for example, has serious limitations that weren’t initially identified.
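
For contrast with SMS delivery, app-based code generators typically implement TOTP (RFC 6238), which derives the same short-lived code on the server and on the user’s device from a shared secret, so nothing sensitive ever crosses the carrier network. A minimal sketch using only the Python standard library:

```python
# Minimal RFC 6238 (TOTP) implementation; standard library only.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period      # time steps since the epoch
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and authenticator app compute the same 6-digit code independently,
# so there is no code in transit for an attacker to intercept.
print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; the code rotates every 30s
```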

Second, we are seeing much more use of different types of biometric authentication, which uses physical characteristics of your body to identify you. From fingerprint readers on notebooks and smartphones, to iris scanning, and, if rumors about Apple’s new iPhone are to be believed, facial recognition on smartphones, the availability of these generally much more secure methods of ID verification is becoming more widespread. Now, some worry that biometric data, like a single universal ID, represents a security concern: you can’t “change” your biometric data, and if it’s somehow stolen, you have a security challenge. However, biometric data in combination with the requirement for multiple factors of authentication (even, in some cases, multiple forms of biometric identification) is generally considered very secure.

Third, we’re starting to see more efforts to form industry-wide collaborations to help drive the “universality” of these identity concepts. The FIDO Alliance, for example, is working with a variety of major tech, credit card, banking, and other financial services companies to develop a standard that will interoperate across websites, devices, services and more.

In addition, just last week, the four major US carriers—in an extremely rare show of complete unity—announced the development of the Mobile Authentication Taskforce. This group will be responsible for developing a single, consistent method of authentication that both consumers and businesses can use to accurately identify people using mobile devices on any US telecom network. First results won’t be showing up until 2018, but this sounds like an enormously positive development.

The challenges of creating a viable, secure, and modern form of digital identity are extremely difficult, and even in spite of all the positive efforts I’ve listed here, there’s no guarantee we will have a viable option anytime soon. But as the events of the last week have hammered home, it is absolutely time to move past old ideas and embrace the opportunities that a digital identity can enable.

News You Might Have Missed: Week of Sept 8, 2017

Google and Xiaomi partner for new Android One phone

At the start of the week, Google and Xiaomi announced a partnership to bring Android One to India and other markets in the shape of the new Mi A1, a smartphone priced at just over $200 that comes with the tagline “created by Xiaomi and powered by Google.” The A1 is very similar to the Mi 5X recently announced in China. It has a 5.5-inch 1080p screen, a metal body, and a dual-camera system that includes a secondary telephoto lens for 2x zooming and portraits with shallow depth of field, and it runs on Qualcomm’s Snapdragon 625. As you would expect from the tagline, the A1 runs stock Android.

Via Engadget

  • This is the second attempt to go into emerging markets with an Android One product, and the key difference is the price point. The initial play was at a very low price point, but consumers just did not seem interested in getting an entry-level device running Android One.
  • For Google, this represents the opportunity to get a higher-end device with a very attractive price point that runs on stock Android, driving higher engagement with Google services.
  • For Xiaomi, it means a broader market reach compared to the countries it usually launches in: 40 markets altogether. This is a big opportunity for Xiaomi, enabled by the fact that it did not have to come up with the software and services for each country.
  • The lack of Xiaomi’s usual user experience is the big risk for a vendor that grew a strong and loyal fan base precisely on its differentiated UI more than on its hardware.
  • In January, rumors had it that Android One was coming to the US market in the summer, but so far we have not seen anything. It will be interesting to see who the partner will be, but clearly we should expect someone that can deliver aggressively priced, higher-end devices.

Amazon HQ2

Amazon is looking at opening a second North American HQ and is asking cities to bid for it

Amazon is looking for a location with strong local and regional talent—particularly in software development and related fields—as well as a stable and business-friendly environment to continue hiring and innovating on behalf of its customers. Bezos’s company is expected to invest over $5 billion in construction and grow this second headquarters to include as many as 50,000 high-paying jobs; it will be a full equal to the campus in Seattle. In addition to Amazon’s direct hiring and investment, construction and ongoing operation of Amazon HQ2 is expected to create tens of thousands of additional jobs and tens of billions of dollars in additional investment in the surrounding community.

Via Amazon


  • Amazon seems to have outgrown Seattle, a city that grew thanks to Amazon but is now faced with many of the challenges we see here in Silicon Valley: traffic, high real estate costs, and strained public infrastructure.
  • Adding a second HQ will provide more flexibility to new hires, which might be a competitive advantage when it comes to attracting the best talent.
  • Given Amazon is talking about North America, Canadian options could be considered, but Bezos would attract even more criticism from the US President. Alternatively, cities in Illinois, Kentucky, and Ohio could also be options, as they have already been generous with state and local subsidies.
  • Texas might be an option given Whole Foods is headquartered there, as well as the fact that Austin is becoming an active tech hub.
  • For Seattle, after the recent departure of Boeing, this news is all positive, as the city could not have coped with an extra 50,000 people.

Google rumored to buy HTC

A couple of weeks ago, HTC said it was evaluating options as far as selling its business or spinning off its Vive business. Today, a report from a Taiwanese news outlet called Commercial Times says Google is in the final stages of acquiring all or part of smartphone maker HTC.

Via Mashable

  • The only reason I think it makes sense for Google to buy HTC is if it thought HTC could be bought by someone else and would therefore be unable to continue making Pixel phones for Google. In a way, this is not dissimilar from Microsoft buying Nokia when it feared losing the largest Windows Phone maker it had.
  • The interesting part would be if Vive were also on offer, as this could give Google a competitive edge over Facebook and Oculus.
  • Of course, this would not be Google’s first rodeo when it comes to buying a smartphone maker. Why is HTC different from Motorola? Because Google would focus on Pixel as a brand, not HTC
  • Some analysts said that HTC could cut its staff in half and try to stay afloat for longer. Right now HTC has more than double the staff of Motorola or Sony, yet a lower sales volume than either of the two.

The Autonomous Car Charade

It’s time to face some challenging realities when it comes to the world of autonomous cars. While consensus seems to imply that the future of driving is nearly upon us, even a relatively cursory look at some of the necessary enablers for truly autonomous automobiles would suggest otherwise.

From security concerns to high costs to missing infrastructure to car design complexity to uncertain legal expectations, and more, there are a host of legitimate concerns that, in some cases, by themselves represent a serious challenge to the near-term release of truly independent vehicles. Taken together, however, they strongly suggest a much longer timeline for adoption than many have been led to believe.

Let’s start with some basics. The general expectation is that autonomy is intrinsically linked to vehicle electrification. The big problem here is that very few consumers are buying or planning to buy electric vehicles. Sure, we can point to the hundreds of thousands of pre-orders for Tesla’s Model 3, but even if they all get delivered over the next two years, they will represent a tiny, single-digit percentage of total US auto sales.

Throw in all the other electric vehicles from other carmakers and the number still remains well below 5%. Why? In part because US consumers are generally very concerned about getting stranded if the batteries run out. Rightly or wrongly, until we see nearly as many charging stations as we have gas stations, there will be reluctance on the part of car buyers to give up their gas-powered vehicles. (Of course, throw in the fact that there are multiple electric car charging standards and that charging “fill-ups” are measured in tens of minutes—or even hours—and you start to get a sense of the problem.)

We could start to see more interest in electric vehicles as second cars that are used primarily for short errand trips around town, but then we run into pricing concerns because few people want to spend more for a second car than their primary vehicle. Plus, the costs and potential impact on the electric grid as consumers start to install in-garage charging systems—yet another expense associated with electric cars—are potential concerns.

Even if we get past the electric car issues—or if, as I suspect, we start to see more autonomous driving features in hybrid or even gas-powered vehicles—plenty of other obstacles remain.

Foremost among these are security issues—at many levels. First, there is the physical security and safety of both autonomous vehicle occupants and the other people who interact with autonomous vehicles. While it’s clear that great advances in autonomous driving algorithms have been made, it’s also obvious that there are still questions about how “ready” this technology currently is. The fact that several engineers from Tesla’s AutoPilot program actually went so far as to leave the company, in part because of their concerns about the safety of current implementations, speaks volumes about the current state of affairs in autonomous driving systems.

Beyond physical safety are the cybersecurity concerns. As has been discussed by many before, there are enormous potential threats that are opened when the connectivity necessary to build and run autonomous cars is put into place. The notion of hacking when it comes to automobiles moves from an annoyance to a life-threatening concern.

Many companies are currently doing excellent work to try to combat or prevent these kinds of issues. However, their work is made significantly more difficult by the fact that modern car designs and internal architectures are both extraordinarily complex—“Rube Goldberg”-like is not far from the truth—and, in some instances, based on old, limited standards that were never intended to support today’s computing and connectivity requirements.

The recent discovery that the CAN bus (an absolutely essential part of how a car’s various system components are linked together) is fundamentally broken when it comes to preventing some modern types of digital threats, for example, is just the latest in a long line of concerns about current car architectures. The truth is, we’re way overdue for an entirely new approach to car design—especially for autonomous cars—but the auto industry’s supply chain, infrastructure, and entire way of working is stacked strongly against these kinds of necessary major changes happening anytime soon.
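
To make the CAN bus problem concrete, here is a simplified sketch of a classic CAN frame in Python. The field layout is abridged and the IDs are invented; the point is what the frame format omits: any notion of sender identity or authentication.

```python
# Simplified model of a classic CAN frame. Note what is NOT in the frame:
# no sender address, no signature, no authentication of any kind.
from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int  # 11-bit message ID; also determines bus priority
    data: bytes          # 0-8 byte payload (CRC and control bits omitted)

def send(bus: list, frame: CanFrame) -> None:
    assert frame.arbitration_id < 0x800 and len(frame.data) <= 8
    bus.append(frame)  # every node on the bus sees, and trusts, this frame

bus: list[CanFrame] = []
# A compromised infotainment unit can emit the same ID as, say, a brake
# module, and receivers have no protocol-level way to tell the difference.
send(bus, CanFrame(arbitration_id=0x1A0, data=bytes(8)))
```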

Even if we’re much more optimistic about the technology work being done within the cars, there are yet other external factors that will continue to act as an impediment to near-term deployment. For example, one of the key technologies expected to enable full autonomy is the ability for cars to communicate with each other and with other elements of the transportation infrastructure (stoplights, road signs, etc.), commonly referred to as V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure). The problem is, even though the US auto industry agreed about 15 years ago to use a technology called DSRC (Dedicated Short Range Communications), there are essentially no major deployments of the technology, and now there are strong efforts to switch to a more modern standard based on the kinds of technologies expected to be part of 5G cellular networks. It’s going to be a long, and likely messy, battle to get this figured out and to get the infrastructure built before any cars can really start to use it.

Finally, there are also concerns about regulatory standards, insurance liability, and other legal issues that could dramatically slow down deployments even if all the aforementioned technical, security, infrastructure, and other issues do get resolved.

The bottom line is that it’s hard to imagine widespread availability and usage of autonomous cars for a very long time to come. Having said that, I believe there are enormous benefits around “assisted driving” features that are much more likely to have a very strong and very positive near-term impact. From automatic braking to more advanced cruise control, there are some great new technologies coming soon to cars that will both help save lives and make our driving experiences more pleasant and more convenient.

In addition, I believe we will see real deployments of autonomy in the near future for applications like fleet driving of large cargo vehicles on interstates and other places where the return on investment is much clearer and the risks are a bit lower. Even still, those applications will likely not become commonplace until well into the next decade.

For those predicting radical changes in how consumer-purchased cars and trucks are built, bought, and used over the next few years, however, it’s time to stop the charade.

News You Might Have Missed: Week of Sept 1st

Samsung Updates Its Wearables Lineup

On the first big day of IFA, the European consumer electronics show, Samsung launched the new Gear Fit 2 Pro, the Gear Sport, and the Gear IconX 2018. Both the Gear Sport and the Gear Fit 2 Pro offer water resistance and swim tracking, auto-activity detection, and personalized motivation. Both devices provide access to Under Armour’s fitness apps, including Under Armour Record, MyFitnessPal, MapMyRun, and Endomondo, for activity, nutrition, sleep, and fitness tracking functions.

Samsung also announced a new Frame TV, a 49” gaming monitor, the AddWash washing machine, and the PowerStick Pro vacuum cleaner.

Via Samsung Newsroom 

  • Interesting partnership with Speedo on the Gear Fit 2 Pro. The Speedo On app provides lap counts, time, and stroke type, very similar to what Apple Watch does. This is Samsung’s spin on the Apple-Nike partnership. Targeting the second most popular activity for fitness enthusiasts, Samsung is hoping to provide a differentiator with the help of a very well-respected brand. The Gear Fit 2 Pro could be a good upgrade for current Gear Fit owners as well as Fitbit owners.
  • In the new IDC wearable market share numbers released on Thursday, Samsung dropped out of the top five. With the revamped portfolio, I would expect Samsung to regain its place, but moving to the top remains challenging as Xiaomi owns the low end of the market and Apple the high end.
  • The updated IconX wireless earbuds should do better than the first generation. When the originals came to market, consumers were still questioning the wireless earbuds segment, but Apple’s AirPods have since proven very popular and scored very high in satisfaction. This higher degree of awareness is something Samsung can harness, as well as some AirPods envy among Samsung Galaxy users. The bad news for Samsung is that Sony and Bang & Olufsen both launched new wireless earbuds at IFA: the WF-1000X and the B&O Beoplay E8. This segment is getting crowded fast.

Nest has a New, Cheaper Thermostat: the Thermostat E

Nest added a new thermostat to its line, not an upgrade to the current one but rather a brand-new, cheaper model called the Thermostat E. What is different? Mainly the look. The Thermostat E is meant to blend in with your walls and disappear, so it comes with a white plastic ring instead of an aluminum one and a white frosted screen. As far as capabilities go, users are sacrificing very little. This is still a learning thermostat that works with Alexa and Google Assistant. The only feature the Thermostat E lacks compared to Nest’s higher-end thermostat is “Farsight,” which lets the thermostat tell when you’re across the room and then turns on its display to show you the time or temperature. Compatibility with higher-end heating and cooling systems is not guaranteed, as Nest says the Thermostat E is compatible with 85% of homes, rather than 95%. At $169, the Thermostat E is certainly an interesting buy if you can get over its looks.

Via Wired 

  • With Google, you always know mass market is the goal, so this should not be a surprise. At $169, compared to $249 for the aluminum-frame model, the Nest E will surely tempt a few people who have been sitting on the fence due to price. After all, the original Nest has rarely seen large discounts.
  • Interestingly, as I was looking at nest.com, I found that PG&E is offering a $50 rebate on all smart thermostats bought and installed between June and December. This would bring the price down to almost $99, the sweet spot for a tech impulse buy.
  • In our connected home research at Creative Strategies, we found that thermostats, and Nest in particular, were the first devices consumers experimenting with a smart home would try, followed by lights. At the start of the market, many Nest buyers were iPhone users, but as Nest continues not to support HomeKit, Apple users might be looking elsewhere, which might just mean that Google does not enter certain homes at all.

Samsung Gets Approval from California to Test Self-Driving Cars

According to the DMV’s list, Samsung has joined a number of other companies from the car and tech worlds approved to test self-driving cars in California. Back in May, Samsung received approval to undergo testing in its home market of South Korea.

Via The Verge 

  • This should not come as a surprise given Samsung’s acquisition of Harman. Just this week at their IFA press conference, the SVP of Technology at Harman highlighted the brand’s strong presence in connected cars.
  • As we have seen in other cases, though, just because a company has been granted permission to test a self-driving car does not mean it will make one.
  • Samsung has been quite clear from the get-go of the Harman acquisition that the focus is on taking advantage of what is on the roads today. In other words, there are several steps between assisted driving and fully self-driving cars, and Samsung believes it is capable of playing a role as a supplier of components, software, and more.

Amazon Adds Parental Controls and Kids Skills for Echo

Amazon announced a bunch of kid-friendly activities and games that are available right now as part of the new push. Kids can check out The SpongeBob Challenge, Sesame Street, and Amazon Storytime, as well as games like The Tickle Monster Game! and Zoo Walk, on their own Echo. Amazon also says that hundreds of developers are interested in making kid skills, with more on the way. Alongside the added skills, Amazon also added parental controls. The first time you ask Alexa to enable a skill that’s been identified for kids, it will ask you to give the skill permission in the Alexa app. You’ll need to verify your identity with a one-time text code sent to the phone number in your Amazon account or with the security code of the credit card on file. You’ll be able to manage permissions on Amazon.com thereafter.

Via Engadget  

  • Echo Dot and its very aggressive pricing have helped Amazon increase penetration in the home. According to our research at Creative Strategies, Echo Dots do indeed end up in kids’ bedrooms mainly as a low-end speaker.
  • Kids’ games have been available on Echo since the very beginning, but what Amazon is doing here is actually rubber-stamping skills that are aimed at children.
  • Having gone through the process of adding a kids skill, I can say it is very straightforward and only takes a couple of minutes. While it might not seem necessary today, we know from phones and tablets that some kind of parental control over apps is needed, and skills are no different.
  • With some focus on detoxing from screens, I can see how Alexa could become the new glorified babysitter, representing a sizable opportunity for developers.
  • Alexa has been able to read certain Kindle books, and we have used her for bedtime stories before, but the new skill makes Alexa much less robotic, improving the experience. Although Amazon is starting with Amazon Rapids for some of the stories, I could see a new Audible offering targeting kids through Alexa.

The Golden Era of Notebooks

As we head towards the end of summer, when kids go back to school and many happy vacationers reluctantly return to their workplaces, it’s common to think about the potential for new devices to help with renewed educational and vocational efforts.

Back-to-school is a particularly important time for notebook PCs, as many vendors introduce new models to meet the seasonal boost in demand that hits this time each year. The great news this year is that it’s hard to go wrong with the options being made available. Thanks to some critical new technology announcements, advancements in some key standards, and most importantly, improvements in the physical designs of modern notebooks, there is a wealth of great options from which to choose.

In fact, after years of hype and, frankly, some unfortunate cases of overpromising and underdelivering, we’re finally starting to get the super sleek and ultrathin, yet very powerful and flexible laptops we were promised a long time ago. To put it bluntly, the Windows PC industry has finally caught up to, and arguably surpassed, what Apple started with the MacBook Air about nine years ago.

Pick up the latest offerings from Dell, HP, Lenovo, Acer, or any other major Windows PC vendor, compare them to the notebook you currently own or use for work, and the difference will likely be dramatic. Today’s laptops are lighter, offer longer battery life, and nearly a third feature flexible designs. Some have bendable hinges that enable switching from a traditional clamshell format, with the keyboard below the screen, to a tablet-style mode with a touchscreen interface. Others feature detachable keyboards, most notably Microsoft’s growing range of Surface devices.

Beyond the more obvious physical design enhancements, these new laptops also start up, launch applications, and run much faster than their predecessors. This performance boost is primarily due to some important “under-the-hood” improvements in the chips powering today’s notebooks. Last week, for example, Intel announced the eighth generation of its Core line of CPUs, the Core i3, i5, and i7, which offer up to a 40% boost in performance versus even last year’s models on some applications (though not on everything).

A good portion of this boost is due to Intel increasing the number of independent computing cores inside the CPU. Because people do more multitasking and keep multiple applications open and running on their computers these days, as well as the nature of how modern software is being written, these extra cores can make an important difference in real-world performance.

In fact, Intel’s main competitor in the CPU market, AMD, used this design concept in both its Ryzen and Threadripper desktop CPUs—introduced earlier this year—to great effect. Thanks to these changes, AMD is finally starting to compete with and, in some instances, beat Intel in desktop CPUs. AMD will be bringing these advancements to the mobile market in 2018. Best of all, though, this has brought a greatly renewed sense of competition back to the market, which will make both companies’ chips faster and the notebooks using these new designs even better, and that is good news for all of us.

The semiconductor improvements in PCs aren’t just limited to CPUs. Nvidia and AMD continue to drive the mobile PC gaming market forward with their dedicated GPUs. Nvidia just unveiled a new thin design it calls Max-Q that allows even its high-end GeForce GTX 1080 chip to fit inside a comparatively thin 18mm notebook, a huge improvement over many current gaming notebooks.

As with CPUs, AMD also just made a strong new entry on the desktop side with their new Vega architecture chips, formally introduced earlier this month, and they will bring Vega to notebooks in 2018.

But you may not even have to wait until then, because the final key advancement in today’s notebooks is a relatively new connection standard called Thunderbolt 3. Found primarily on more expensive notebooks right now, Thunderbolt 3 uses the USB Type-C physical connector but supercharges it with the ability to connect up to two 4K displays, to power the notebook itself, to attach storage devices that work as fast as internal hard drives, and, most interesting of all, to connect desktop graphics cards to a thin notebook. Now, you will need a relatively large, separately powered adapter housing for the card, but the ability to connect and even potentially upgrade desktop-quality graphics on a notebook PC is a capability that’s never been widely available before.

Put all these elements together and it’s clear that we really are in a golden era for laptop PCs. Small, lightweight designs, fast performance, tremendous expandability, and improved flexibility are enabling some of the most compelling new notebook designs we’ve ever seen. Throw in the fact that many new notebooks will be more than capable of driving the new mixed reality VR headsets that Microsoft and its PC partners just announced this week and the outlook appears even brighter. Plus, this vigorous new competitive environment is providing a desperately needed revived spirit for the PC industry overall, and promises even more improvements for the future.

Why Tech Companies Keep Buying Sports Rights No One Cares About

This past week saw both Twitter and Facebook sign deals with recently-launched sports network Stadium for live coverage of various college sports. Both companies had earlier signed TV streaming deals for a number of other bits and pieces of sports content, none of them particularly compelling. In a world where sports content is one of the few slices of live TV still holding up reasonably well as viewing shifts to on-demand and streaming, why aren’t these companies buying more interesting stuff? The answer lies largely in the long-term deals signed by the major sports leagues in the US.

Recent Sports Rights Deals for Tech Companies Are Mostly Sub-Par

Twitter and Facebook’s Stadium deals are far from the only ones they or other tech companies have signed over the last couple of years. This year in particular has seen a big increase in investment by these companies as they look to fill their rosters of live video content, with Twitter especially trying to deliver on its commitment to have 24/7 live streaming video on its site. Some other examples include:

  • Facebook signed a deal with Major League Soccer and Spanish-language broadcaster Univision, another deal with the NBA’s D-League (the minor league of the NBA), showed pre-Olympic basketball exhibition games in summer 2016, will show 20 live MLB games on Friday nights in the 2017 season, and secured rights to a number of second-tier European soccer tournament games for the 2017-2018 season, as well as over 5,000 hours of e-sports content.
  • Twitter won the 2016-2017 deal to stream Thursday night NFL games online (a deal lost to Amazon this year), but other than that has mostly had content of less interest to most viewers, with few exceptions: the Wimbledon tennis tournament was one highlight, but it has also signed deals with the WNBA, the PGA Tour, and various other sports, including lacrosse.

To be clear, there is an audience for each of these sporting events that these companies are carrying, but the vast majority of it sits outside of the sports that actually drive live viewing in meaningful numbers. Facebook’s live video metrics have all been vague and relative, so we have no way of measuring its success in absolute terms, but Twitter has provided the number of unique live video viewers each quarter, as shown in the chart below:

As you can see, the number of live video viewers has grown over time, but in Q2 it was just 17% of Twitter’s total monthly active users, meaning that more than 80% of its monthly active users did not watch any live video at all. And of that 17%, it’s entirely possible that many simply watched a few seconds, so the number that actively engaged and watched any meaningful amount of live video is likely far smaller still.

The Best Rights Are All Locked Up for Years

Why, then, do Facebook and Twitter bother with these sub-par sports rights that drive little viewing? The simple answer is that the rights that might actually drive meaningful engagement are almost all locked up for years. The table below shows a summary view of the US TV rights for the four major sports leagues in the US, some of which are sliced and diced in many different ways, with the rest allocated more simply:

As you can see, Major League Baseball, the National Hockey League, and the NBA are all locked up until at least 2021, with most of the NFL rights packages also locked up for almost as long. The NBA won’t be available to new bidders until the 2025 season. This is why the tech companies – notably Facebook and Twitter – have been acquiring so many other rights: because they’re the only ones available. No matter how much these companies are willing to spend, they simply can’t get significant access to the major sports people actually watch in the US.

The one exception to all this has been the NFL’s Thursday Night Football package, which has had separate broadcast and digital rights deals for the last couple of years. Twitter won that deal for the 2016/17 season, but lost it this year to Amazon, which is likely to be another big bidder for sports rights in the next few years. Verizon, meanwhile, has the unique mobile rights to NFL games, which detracts from every other digital football package out there, but that deal will expire in 2018, so it will be interesting to see what happens then.

Two More Years of Dealing in Marginal Sports Content

All of this means we likely have two more years of tech companies mostly dealing in the same marginal sports content we’ve seen so far from them, grabbing a few games here and there from the major leagues, and then securing broader rights to sports of less interest to the mainstream US user. But a couple of years from now, as some of the rights negotiations for big deals starting with the 2021 season begin, I would expect a number of big tech companies, including not just Facebook and Twitter but also Amazon, Google, and Apple, to be major bidders and likely secure some big packages which have hitherto all been captured by broadcasters and cable operators.

In the meantime, the main focus for most of these companies will have to be on video content other than sports, meaning commissioning both scripted and unscripted shows or acquiring those being shopped around. Investment in original content in particular will continue to grow, with Apple apparently spending a billion dollars this year, Netflix spending $7 billion, and others like HBO, Hulu, and Amazon spending somewhere in between those figures. For now, that’s the only thing these companies can do, and it’s going to mean that the prices of content go up and there’s eventually a glut of video on the market, until the legacy TV industry starts reducing its spend in the face of accelerating cord cutting and cord shaving.

Podcast: Samsung Note 8, Smartphones, Apple Auto Plans

This week’s Tech.pinions podcast features Carolina Milanesi, Jan Dawson and Bob O’Donnell discussing Samsung’s Note 8 launch event, analyzing the evolution of smartphones overall, and debating the importance of Apple’s future auto-related plans.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Evolution of Smart Speakers

For a relatively nascent product category, smart speakers like the Amazon Echo and Google Home are already seeing a huge influx of attention from both consumers and potential competitors eager to enter the market. Apple has announced the HomePod, and numerous other vendors have either unveiled or are heavily rumored to be working on versions of their own.

Harman Kardon (in conjunction with Microsoft), GE Lighting and Lenovo have announced products in the US, while Alibaba, Xiaomi and JD.com, among others, have said they will be bringing products out in China. In addition, Facebook is rumored to be building a screen-equipped smart speaker called Gizmo.

One obvious question after hearing about all the new entrants is, how can they all survive? The short answer, of course, is they won’t. Nevertheless, expect to see a lot of jockeying, marketing and positioning over the next year or two because it’s still very early days in the world of AI-powered and personal assistant-driven smart speakers.

Yes, Amazon has built an impressive and commanding presence with the Echo line, but there are many limitations to Echos and all current smart speakers that frustrate existing users. Thankfully, technology improvements are coming that will enable competitors to differentiate themselves in ways that reduce the frustration and increase the satisfaction consumers have with smart speakers.

Part of the work involves the overall architecture of the devices and how they interact with cloud-based services. For example, one of the critical capabilities many users want is the ability to accurately recognize the different individuals who speak to the device, so that responses can be customized for different members of a household. To achieve this as quickly and accurately as possible, it doesn’t make sense to send the audio signal to the cloud and then wait for the response. Even with superfast network connections, the inevitable delays make interactions with the device feel somewhat awkward.

The same problem exists when you try to move beyond the simple, single-query requests most people make to their smart speakers today (“Alexa, play music by horn bands” or “Alexa, what is the capital of Iceland?”). In order to have naturally flowing, multi-question or multi-statement conversations, the delays (or latency) have to be dramatically reduced.

The obvious answer to the problem is to do more of the recognition and response work locally on the device and not rely on a cloud-based network connection to do so. In fact, this is a great example of the larger trend of edge computing, where we are seeing devices and applications that used to rely solely on big data centers in the cloud start to do more of the computational work on their own.

That’s part of the reason you’re starting to see companies like Qualcomm and Intel, among others, develop chips that are designed to enable more powerful local computing work on devices like smart speakers. The ability to learn and then recognize different individuals, for example, is something that the DSP (digital signal processor) component of new chips from these vendors can do.

Another technological challenge facing current-generation products is recognition accuracy. Everyone who has used a smart speaker, or a digital assistant on another device, has had the experience of not being understood. Sometimes that’s due to how the question or command is phrased, but it’s often due to background noise, accents, intonation, or other factors that end up providing an imperfect audio signal to the cloud-based recognition engine. Again, more local audio signal processing can often improve the audio signal being sent, thereby enhancing overall recognition.

Going further, most of the AI-based learning algorithms used to recognize and accurately respond to speech will likely still need to be run in very large, compute-intensive cloud data centers. However, the idea of doing pattern recognition of common phrases (a form of inferencing—the second key aspect of machine learning and AI) locally, with the right kind of computing engines and hardware architectures, is becoming increasingly possible. It may be a long time before all of that work can be done within smart speakers and other edge devices, but even doing some speech recognition on the device should enable higher accuracy and longer conversations. In short, a much better user experience.
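
Here is a toy sketch of that edge-inference pattern: resolve frequent, simple commands with a fast local matcher and fall back to the cloud for everything else. The phrase table and function names are invented for illustration.

```python
# Toy edge-vs-cloud routing for a smart speaker. Everything here is invented;
# real devices use on-device acoustic and language models, not string lookup.

LOCAL_INTENTS = {
    "stop": "playback.stop",
    "volume up": "volume.up",
    "what time is it": "clock.read",
}

def cloud_recognize(utterance: str) -> str:
    # Stand-in for a round trip to a cloud speech service (general but slow).
    return f"cloud-resolved:{utterance}"

def handle(utterance: str) -> str:
    """Resolve common commands on-device (low latency, works offline) and
    defer open-ended queries to the cloud."""
    intent = LOCAL_INTENTS.get(utterance.strip().lower())
    return intent if intent is not None else cloud_recognize(utterance)

print(handle("Volume up"))                       # handled locally
print(handle("what is the capital of Iceland"))  # sent to the cloud
```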

As new entrants try to differentiate their products in an increasingly crowded space, the ability to offer some key tech-based improvements is going to be essential. Clearly there’s a great deal of momentum behind the smart speaker phenomenon, but it’s going to take these kinds of performance improvements to move them beyond idle curiosities and into truly useful, everyday tools.

The Myth of General Purpose Wearables

Understanding one’s true role and purpose is one of life’s greatest challenges. But it’s not supposed to be that way for devices. If they are to be successful, tech gadgets need to have a clear purpose, function, and set of capabilities that people can easily understand and appreciate. If not, well…there is a large and growing bin of technological castoffs.

Part of the reason that the wearable market hasn’t lived up to its early expectations is directly related to this existential crisis. Even now, several years after their debut, it’s still hard for most people to figure out exactly what these devices are, and for which uses they’re best suited.

Of course, wearables are far from a true failure. The Apple Watch, for example, has fared reasonably well. In fact, revenues from the Apple Watch turned the tech juggernaut into one of the top two highest grossing watchmakers in the world—though I’m starting to think that says a lot more about the watch industry than it necessarily does about smartwatches or wearables in general.

The problem is that we were led to believe that wearables—particularly smartwatches like the Apple Watch—were going to be general purpose computing and communication devices capable of all kinds of different applications. Clearly, that has not happened, though some seem to hold out hope that the possibility still exists.

Those hopes were particularly strong over the last few days with rumors about both a potential LTE modem-equipped version of the Apple Watch coming this fall and a potential deal between Apple and CIGNA to provide Apple Watches to their health insurance customers. Some have even argued that an LTE-equipped Apple Watch is a game-changer that can bring dramatic new life to the smartwatch and overall wearable industry.

The argument essentially is that by freeing a smartwatch from the tyranny of its smartphone connection, the smartwatch can finally evolve into the general-purpose tool it was always intended to be. Applications that depend on a network connection can run on their own, duplicative efforts on the watch and the phone can be eliminated, and, who knows, maybe we can finally get the Dick Tracy videophone watch we’ve always dreamt of.

Color me skeptical. Sure, it would be nice to be able to, say, use Spotify or other streaming apps to get dynamic playlists as you exercise, or get texts and other phone-related notifications while you’re away from your phone. Industry-changing and market moving, however, it is not—especially when you factor in the additional costs for both the modem and the service plan you’re going to have to sign up for as well.

Plus, let’s not forget that several vendors (notably Samsung and LG) have already released modem-equipped smartwatches, and they haven’t exactly stormed up the device sales charts. This is due, in part, to the same basic physics challenge that Apple will also have to face: add a modem to a device and it will reduce battery life. Given that many people are frustrated with the battery life of their existing smartwatches, having to dramatically (or even minimally) increase the size of the device to accommodate a larger battery seems like a serious challenge—even for the device wizards at Apple.

The potential of crafting a more healthcare-friendly smartwatch, on the other hand, seems much more appealing to me, and the alleged tie-up with CIGNA could be a very interesting move. Apple was rumored to have some very sophisticated sensors in the works when the Apple Watch was first announced—such as a non-invasive blood glucose monitoring component and a pulse oximeter—and with every new release there are increased expectations for those components to finally arrive. If (or when) they do, the healthcare benefits could prove to be significant for people who choose to use the device. Of course, the need to report all that data back to your insurance company on a regular basis—as a connection with a healthcare company certainly implies—will undoubtedly raise a number of privacy and security-related concerns as well.

Even if those new sensors do appear on the next generation Apple Watch, however, they will further cement the growing sentiment that wearables are actually special-purpose devices that are really optimized for a few specific tasks. Not that that’s a bad thing—it’s just a different reality than many people envisioned.

In the end, though, dispelling the myth that wearables can or should be general purpose devices could, ironically, be the very thing that helps them finally reach the wider audience that many originally thought they could.

Three Insights from The US Wireless Market in Q2 2017

One of the markets I track closely is the US wireless industry, and especially the five largest providers: AT&T, Sprint, T-Mobile, TracFone, and Verizon Wireless. (TracFone sells only prepaid service, so the postpaid metrics below cover the other four.) All of these companies recently reported their financial results for Q2 2017, and as a result we now have a good picture of what happened in the quarter. Here are three key insights from those results.

Smartphone Growth Continues to Slow

This trend has been in evidence for a while, but continued in Q2 2017: smartphone growth continues to slow significantly. The chart below shows year on year growth in the postpaid smartphone base as reported by the major carriers. Whereas in 2014 the industry added nearly 19 million new postpaid smartphone customers, in the past four quarters the same carriers added less than half that, at 8.5 million. That’s an inevitable result of the increased penetration of smartphones, which now averages around 92% across the four carriers’ postpaid bases.

Worse still, the postpaid device upgrade rate also continues to fall for most of the carriers, though all but AT&T saw a little upward blip in Q2 itself. Overall, though, the average in Q2 2017 was 5.9% of the base upgrading, compared to 7.5% two years ago. That means people are holding onto phones longer as devices last longer and as installment plans incentivize customers to keep devices after they’re paid off. All of this, in turn, means that smartphone sales are falling – postpaid smartphone sales this quarter were likely around 2 million fewer than a year ago.
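To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch. The ~130 million postpaid smartphone base is an assumed, illustrative figure rather than a reported one; the upgrade rates are those cited above.

```python
# Back-of-the-envelope: how a falling quarterly upgrade rate translates
# into fewer smartphone sales.
POSTPAID_SMARTPHONE_BASE = 130_000_000  # assumed figure, for illustration only

def quarterly_upgrade_sales(base: int, upgrade_rate: float) -> float:
    """Devices sold to existing subscribers who upgrade in a quarter."""
    return base * upgrade_rate

then_sales = quarterly_upgrade_sales(POSTPAID_SMARTPHONE_BASE, 0.075)  # two years ago
now_sales = quarterly_upgrade_sales(POSTPAID_SMARTPHONE_BASE, 0.059)  # Q2 2017

print(f"Implied drop in upgrade sales: {(then_sales - now_sales) / 1e6:.1f}M per quarter")
# -> roughly 2M fewer devices, the same order of magnitude as the estimate above
```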

Cellular Tablets are in Decline

What’s more, one of the big categories of retail connected devices outside of phones, tablets, is also in decline following years of growth. That decline was precipitated by moves two to three years ago to sell heavily discounted or even free tablets with 2-year service contracts, especially at Verizon and Sprint. As those customers have reached the two-year anniversary of their original purchase, many of them have been churning off the associated service plans, and in the last two quarters the four big carriers as a whole have lost subscribers, as shown in the chart below.

That’s a blow to the carriers at a time when they need more than ever to find new sources of postpaid growth as the phone market slows down and smartphone penetration reaches saturation levels. Smartwatches were thought by some carriers to be a promising new opportunity when they debuted, but in practice few connected smartwatches have been sold so far. If Apple introduces an LTE-enabled Watch next month, as has been reported recently, that could change things, though it will still likely sell in the hundreds of thousands rather than the millions.

Customers are Sticking With Their Provider Longer

Other than the elevated churn levels being seen in tablets at present, the industry is actually doing a better job holding onto its subscribers. Three of the four carriers saw very low postpaid phone churn in Q2 2017, with Sprint the only exception, as shown in the chart below.

Verizon, which has always had the most loyal customers overall, generally hasn’t reported precise numbers for its postpaid phone churn, but did so in Q2 and was below even AT&T’s record low number. But T-Mobile has also seen significant improvements over time, and got a nice boost over the past few quarters from selling some of its low-end subscribers to Wal-Mart, improving the average churn of those that remained. Percentage rates may not mean much on the surface, but a 1% monthly churn rate (roughly between AT&T and T-Mobile’s rates in Q2) translates into a roughly eight-year average customer lifecycle. So for all the fuss in the industry about competition and the way in which T-Mobile is gaining subscribers at the expense of the others, the vast majority of subscribers stay put with their carrier every month, and over 85% stay put every year. The reality is that the differences in network performance and other factors between the carriers have shrunk over time, and the plans they offer make it relatively difficult to compare prices directly, which makes most people reasonably content to stay where they are.
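Since churn-to-lifetime conversions trip a lot of people up, here’s the arithmetic as a quick sketch. It assumes a constant monthly churn rate, which is the standard simplification behind the “one over churn” rule of thumb.

```python
# How a monthly churn rate maps to average customer lifetime and
# annual retention, assuming churn stays constant over time.

def avg_lifetime_years(monthly_churn: float) -> float:
    """Expected customer lifetime is 1 / churn months; convert to years."""
    return (1 / monthly_churn) / 12

def annual_retention(monthly_churn: float) -> float:
    """Share of subscribers still with the same carrier after 12 months."""
    return (1 - monthly_churn) ** 12

churn = 0.01  # ~1% monthly, roughly between AT&T's and T-Mobile's Q2 rates
print(f"Average lifetime: {avg_lifetime_years(churn):.1f} years")  # ~8.3 years
print(f"Annual retention: {annual_retention(churn):.0%}")          # ~89%, i.e. over 85%
```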

The one trigger for switching is often a device upgrade, which as we’ve already seen is happening less frequently these days. But that’s not to say that major events can’t shake things up – this fall’s new iPhones are expected to be big sellers, and we’ll see lots of efforts by the carriers to lock in iPhone upgraders with discounts and promotions when the devices are released. We’ll likely see strong smartphone sales (perhaps the strongest since 2014) and higher switching in Q4 as a result, with some of that bleeding over into Q1, traditionally a relatively quiet quarter for switching.

Podcast: Microsoft Surface and Consumer Reports, NVIDIA Earnings, Google Diversity Memo

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell chatting about Consumer Reports’ decision to no longer recommend Microsoft Surface devices, analyzing NVIDIA’s earnings, and discussing Google’s controversial diversity memo and the issues it has raised for Silicon Valley.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

IoT Connections Made Easy

For long-time tech industry observers, many of the primary concepts behind the business-focused Internet of Things (IoT) feel kind of old. After all, people have been connecting PCs and other computing devices to industrial, manufacturing, and process equipment for decades.

But there are two key developments that give IoT a critically important new role: real-time analysis of sensor-based data, sometimes called “edge” computing, and the communication and transfer of that data up the computing value chain.

In fact, enterprise IoT (and even some consumer-focused applications) are bringing new relevance and vigor to the concept of distributed computing, where several types of workloads are spread throughout a connected chain of computing devices, from the endpoint, to the edge, to the data center, and, most typically, to the cloud. Some people have started referring to this type of effort as “fog computing.”

Critical to that entire process are the communications links between the various elements. Many of those connections are still based on good old wired Ethernet, but an increasing number are going wireless. Within organizations, WiFi has grown to play a key role, but because many IoT applications are geographically dispersed, the most important link is proving to be wide-area wireless, such as cellular.

A few proprietary standards that leverage unlicensed radio spectrum (that is, unmanaged frequencies that any commercial or non-commercial entity can use without a license), such as Sigfox and LoRa, have arisen to address some specific needs and IoT applications. However, it turns out traditional cellular and LTE networks are well-suited to many IoT applications for several reasons, many of which are not well-known or understood.

First, in the often slower-moving world of industrial computing, there are still many live 2G network deployments, and they still carry a relatively large amount of usage. Yes, 2G. The reason is that many IoT applications generate tiny amounts of data and aren’t particularly time-sensitive, so the older, slower, cheaper networks still work.

Many telcos, however, are in the midst of upgrading their networks to faster versions of 4G LTE and preparing for 5G. As part of that process, many are shutting down their 2G networks so that they can reclaim the radio frequencies previously used for 2G for use in their faster 4G and 5G networks. Being able to transition from those 2G networks to later cellular technologies, however, is a practical, real-world requirement.

Second, there’s been a great deal of focus by larger operators and technology providers, such as Ericsson and Qualcomm, on creating low-cost and, most importantly, low-power wide-area networks that can address the connectivity and data requirements of IoT applications, such as smart metering, connected wearables, asset tracking and industrial sensors, but within a modern network environment.

The two most well-known efforts are LTE Cat M1 (sometimes also called eMTC) and LTE Cat NB1 (sometimes also called NB-IoT or Narrowband IoT), both of which were codified by telecom industry association 3GPP (3rd Generation Partnership Project) as part of what they call their Release 13 set of specifications. Cat M1 and NB1 are collectively referred to as LTE IoT.

Essentially, LTE IoT is part of the well-known and widely deployed LTE network standard (part of the 4G spec—if you’re keeping track) and provides two different speed and power tiers for different types of IoT applications. Cat M1 demands more power, but also supports basic voice calls and data transfer rates up to 1 Mbps, versus no voice and 250 kbps for NB1. On the power side, despite the different requirements, both Cat M1 and NB1 devices can run on a single battery for up to 10 years—a critical capability for IoT applications that leverage sensors in remote locations.
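To make the trade-off concrete, here’s a small illustrative sketch. The data rates and voice support come from the Release 13 categories described above; the selection function itself is hypothetical and not part of any real SDK.

```python
# The Cat M1 / NB1 trade-off as a simple (hypothetical) selector that
# picks the most constrained LTE IoT category that fits an application.

LTE_IOT = {
    "Cat M1":  {"max_kbps": 1000, "voice": True},   # a.k.a. eMTC
    "Cat NB1": {"max_kbps": 250,  "voice": False},  # a.k.a. NB-IoT
}

def pick_category(needs_voice: bool, required_kbps: int) -> str:
    """Return the lowest-bandwidth category that satisfies the application."""
    for name, spec in sorted(LTE_IOT.items(), key=lambda kv: kv[1]["max_kbps"]):
        if spec["max_kbps"] >= required_kbps and (spec["voice"] or not needs_voice):
            return name
    raise ValueError("No LTE IoT category fits; consider full LTE")

print(pick_category(needs_voice=False, required_kbps=50))   # Cat NB1: e.g. a smart meter
print(pick_category(needs_voice=True,  required_kbps=300))  # Cat M1: e.g. a wearable
```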

Even better, these two can be deployed alongside existing 4G networks with some software-based upgrades of existing cellular infrastructure. This is critically important for carriers, because it significantly reduces the cost of adding these technologies to their networks, making it much more likely they will do so. In the U.S., both AT&T and Verizon already offer nationwide LTE Cat M1 coverage, while T-Mobile recently completed NB1 tests on a live commercial network. Worldwide, the list is growing quickly with over 20 operators committed to LTE IoT.

In fact, it turns out both M1 and NB1 variants of LTE IoT can be run at the same time on existing cellular networks. In addition, if carriers choose to, they can start by deploying just one of the technologies and then either add or transition to the other. This point hasn’t been very clear to many in the industry because several major telcos have publicly spoken about deploying one technology or the other for IoT applications, implying that they chose one over the other. The truth is, the two network types are complementary and many operators can and will use both.

Of course, to take advantage of that flexibility, organizations also require devices that can connect to these various networks and, in some cases, be upgraded to move from one type of network connection to another. Though not widely known, Qualcomm recently introduced a global multimode modem specifically for IoT devices called the MDM9206 that not only supports both Cat M1 and Cat NB1, but even eGPRS connections for 2G networks. Plus, it includes the ability to be remotely upgraded or switched as IoT applications and network infrastructures evolve.

Like many core technologies, the world of communications between the billions of devices that are eventually expected to be part of the Internet of Things can be extremely complicated. Nevertheless, it’s important to clear up potential confusion over what kinds of networks we can expect to see used across our range of connected devices. It turns out, those connections may be a bit easier than we thought.

Podcast: SIGGRAPH AMD and nVIDIA, Apple and Tesla Earnings

This week’s Tech.pinions podcast features Ben Bajarin, Jan Dawson and Bob O’Donnell discussing graphics and AI-related announcements from AMD and nVIDIA made at the SIGGRAPH convention, and the earnings reports from Apple and Tesla.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Microsoft Stores are a Big Missed Opportunity

The latest Microsoft earnings results were a stark reminder that the consumer market makes only a marginal contribution to the company’s overall revenue. Many believe consumers are not a priority for Microsoft and therefore struggle to understand the role of the Microsoft stores. They argue Microsoft should admit the stores were an experiment, one that failed, and that it’s time to close them.

I believe it would be a mistake.

I also believe Microsoft does care about consumers; it just struggles to show it, especially when it comes to apps and services.

Microsoft is the exact opposite of Apple in the balance between enterprise and consumer. Apple goes out of its way not to come across as an enterprise company while Microsoft goes out of its way to always put enterprises first. In reality, both companies care about both markets and, more importantly, both companies need both markets!

When it comes to their retail presence, the two companies share similar goals. While it is not something Microsoft would admit to, creating an Apple store experience was the goal when they first opened their stores. Any tech company looking to have a retail presence should have Apple as a benchmark.

Aside from the short period when John Browett ran Apple’s retail business, Apple’s stores have always been about using great customer care to enhance brand loyalty. Apple stores are without a doubt one of Apple’s strongest marketing assets, as well as a solid revenue generator. People go into the stores to experience new devices, seek help with the ones they own and learn how to get the most out of them. In exchanges I have witnessed in stores, both in the US and in the UK where I lived, customers were met with knowledgeable and invested employees who made them feel cared for.

Microsoft has thus far failed to create an in-store experience that helps its brand. Calling it quits now, however, would be the wrong thing to do. Microsoft has never had this much to offer consumers in terms of an end-to-end experience. This need to experience – not try before you buy but truly experience – will grow with ambient computing, making a store presence even more valuable.

A Showcase for the Surface Portfolio and Microsoft Apps

Microsoft now has a full portfolio of Surface products that can be experienced in store. On display are not just the products but the vision Microsoft has of modern computing. Everything from Surface Pro to Surface Book to Surface Laptop, along with the more aspirational Surface Studio and Surface Hub, helps to tell that story. I was in a store with my daughter recently for a coding camp, and seeing how the kids were drawn to the Hub made me wonder why there were not more people in the store doing just that. I am sure there are differences across locations in how busy the stores are, but more of a push around devices and experiences could certainly create more buzz.

Back in 2015, Microsoft CEO Satya Nadella said: “we want people to love Windows 10 not just use it.” The same should be said about all Microsoft products including the stores.

Activities in stores have been growing. I have seen more emphasis on STEM as part of the recent education push, including Minecraft coding. Yet more could be done around new apps like Story Remix, or People, or Paint 3D. Stores should offer classes on how to use these apps, and have staff in stores using them as customers come in so shoppers can try them. These kinds of activities would help create a different atmosphere in the store and educate potential customers. They would also help consumers think more broadly about Microsoft.

Discoverability of new Windows 10 features remains an issue, especially for those consumers who upgraded to it on their old computers. Seeing what is possible might generate an upgrade opportunity, and one that will benefit Surface. Surface Pro sales have been growing steadily in the enterprise market but not as much as they could in the consumer one. While many point to cost as an inhibitor, the real issue is the lack of visibility. Many other PC manufacturers have devices at similar price points, and of course Apple does too, so clearly there would be a consumer market for Surface as well if mass-market consumers knew more about it.

A Look to the Future to Build Love for the Brand Today

Microsoft is no longer limited to Windows on PCs, and while Cloud and Office365 might be the biggest revenue generators, there are other products that will define the future of computing.

HoloLens stands out.

Enterprises are very interested in HoloLens, as there are many applications that can save cost, increase productivity and enrich experiences. Yet HoloLens has many consumer applications too, which could help reinvigorate the in-store experience. Think about Holographic Minecraft or a walk on Mars. I realize this is still a device with limited availability, and Microsoft might have concerns about dumbing down the experience and making it feel like a VR park. Yet there are opportunities to offer targeted events, limited in number, that consumers could sign up for.

Microsoft’s effort to democratize 3D could be another area of focus, with classes focused on designing an object in Paint 3D and then printing it. Again, I realize the delicate balance between creating a buzz and creating a circus, but right now stores have very little buzz.

The big point about Apple stores is that they are first and foremost great experience centers. Microsoft stores feel more like a cross between an IT support center and a Best Buy, somewhere I go to buy as a last resort. I go in and get out as quickly as I can. My experience is that Microsoft store staff are there to sell, not to guide me and facilitate my discovery of what Microsoft has to offer.

“Creativity is the new productivity” is a great slogan for Windows, and Microsoft should really look at becoming more creative when it comes to the stores.

Microsoft must deliver a consistent experience across stores, one focused on a shift from serving customers in a transactional exchange to facilitating customers’ experiences. This might require a change in how stores are evaluated and rewarded. Revenue should not be the short-term focus, but rather brand awareness and advocacy, which in turn will bring increased revenues over time.

Smarter Computing

Work smarter, not harder. That’s the phrase that people like to use when talking about how being more efficient in one’s efforts can often have a greater reward.

It’s also starting to become particularly appropriate for some of the latest advances in semiconductor chip design and artificial intelligence-based software efforts. For many years, much of the effort in silicon computing advancements was focused on cramming more transistors running at faster speeds into the same basic architectures. So, CPUs, for example, became bigger and faster, but they were still fundamentally CPUs. Many of the software advancements, in turn, were accomplished by running some of the same basic algorithms and program elements faster.

Several recent announcements from AMD and nVidia, as well as ongoing work by Qualcomm, Intel and others, however, highlight how those rules have radically changed. From new types of chip designs, to different combinations of chip elements, and clever new software tools and methodologies to better take advantage of these chip architectures, we’re on the cusp of seeing a whole new range of radically smarter types of silicon that are going to start enabling the science fiction-like applications that we’ve started to see small glimpses of.

From photorealistic augmented and virtual reality experiences, to truly intelligent assistants and robots, these new hardware chip designs and software efforts are closer to making the impossible seem a lot more possible.

Part of the reason for this is basic physics. While we can argue about the validity of being able to continue the Moore’s Law inspired performance improvements that have given the semiconductor industry a staggering degree of advancements over the last 50 years, there is no denying that things like the clock speeds for CPUs, GPUs and other key types of chips stalled out several years ago. As a result, semiconductor professionals have started to tackle the problem of moving performance forward in very different ways.

In addition, we’ve started to see a much wider array of tasks, or workloads, that today’s semiconductors are being asked to perform. Image recognition, ray tracing, 4K and 8K video editing, highly demanding games, and artificial intelligence-based work are all making it clear that these new kinds of chip design efforts are going to be essential to meet the smarter computing needs of the future.

Specifically, we’ve seen a tremendous rise in interest, awareness, and development of new chip architectures. GPUs have led the charge here, but we’re seeing things like FPGAs (field programmable gate arrays)—such as those from the Altera division of Intel—and dedicated AI chips from the likes of Intel’s new Nervana division, as well as chip newcomers Google and Microsoft, start to establish a strong presence.

We’re also seeing interesting new designs within more traditional chip architectures. AMD’s new high-end Threadripper desktop CPU leverages the company’s Epyc server design and combines multiple independent CPU dies connected together over a high-speed Infinity Fabric connection to drive new levels of performance. This is a radically different take than the traditional concept of just making individual CPU dies bigger and faster. In the future, we could also see different types of semiconductor components (even from companies other than AMD) integrated into a single package all connected over this Infinity Fabric.

This notion of multiple computing parts working together as a heterogeneous whole is seeing many types of iterations. Qualcomm’s work on its Snapdragon SoCs over the last several years, for example, has been to combine CPUs, GPUs, DSPs (digital signal processors) and other unique hardware “chunks” into a coherent whole. Just last week, the company added a new AI software development kit (SDK) that intelligently assigns different types of AI workloads to different components of a Snapdragon—all in an effort to give the best possible performance.
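Conceptually, that kind of dispatch looks something like the sketch below. It mimics the idea of routing each workload to the compute block best suited to it; the routing table, names, and function are invented for illustration and are not Qualcomm’s actual SDK API.

```python
# A toy illustration of heterogeneous dispatch: send each AI workload
# to the compute block best suited to it (CPU by default).
from enum import Enum

class Unit(Enum):
    CPU = "CPU"   # control-heavy, general-purpose work
    GPU = "GPU"   # large parallel matrix math, e.g. vision models
    DSP = "DSP"   # low-power, always-on signal processing

ROUTING = {  # hypothetical workload-to-block assignments
    "image_classification": Unit.GPU,
    "keyword_spotting":     Unit.DSP,  # power-sensitive, always listening
    "app_logic":            Unit.CPU,
}

def dispatch(workload: str) -> Unit:
    """Return the compute block a given workload should run on."""
    return ROUTING.get(workload, Unit.CPU)

for task in ("image_classification", "keyword_spotting", "app_logic"):
    print(f"{task} -> {dispatch(task).value}")
```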

Yet another variation can come from attaching high-end and power demanding external GPUs (or other components) to notebooks via the Thunderbolt 3 standard. Apple showed this with an AMD-based external graphics card at their last event and this week at the SIGGRAPH computer graphics conference, nVidia introduced two entries of its own to the eGPU market.

The developments also go beyond hardware. While many people are (justifiably) getting tired of hearing about how seemingly everything is being enhanced with AI, nVidia showed a compelling demo at their SIGGRAPH press conference in which the highly compute-intensive task of ray-tracing a complex image was sped up tremendously by leveraging an AI-created improvement in rendering. Essentially, nVidia used GPUs to “train” a neural network how to ray-trace certain types of images, then converted that “knowledge” into algorithms that different GPUs can use to redraw and move around very complex images, very quickly. It was a classic demonstration of how the brute force advancements we’ve traditionally seen in GPUs (or CPUs) can be surpassed with smarter ways of using those tools.
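For a rough sense of how the “train once, render fast” idea works, here’s a minimal sketch of a tiny convolutional denoiser that learns to map noisy, low-sample renders to converged references. It assumes PyTorch, and random tensors stand in for real image pairs; this illustrates the general concept, not nVidia’s actual implementation.

```python
# Toy denoiser: learn a mapping from noisy renders to clean references,
# then use one cheap forward pass in place of many ray-tracing samples.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                      # deliberately tiny model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(8, 3, 64, 64)               # stand-in: converged renders
noisy = clean + 0.1 * torch.randn_like(clean)  # stand-in: low-sample renders

for step in range(200):                        # "training" the network
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy), clean)
    loss.backward()
    optimizer.step()

with torch.no_grad():                          # inference: the fast "render"
    restored = denoiser(noisy)
```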

After progress seemed to stall for a while, the performance requirements for newer applications are becoming clear—and the amount of work that’s still needed to get there is becoming clearer still. The only way we can start to achieve these new performance levels is with the types of heterogeneous chip architecture designs and radically different software approaches that are starting to appear.

Though some of these advances have been discussed in theory for a while, it’s only now that they’ve begun to appear. Not only are we seeing important steps forward, but we are also beginning to see the fog lift as to the future of these technologies and where the tech industry is headed. The image ahead is starting to look pretty good.

Podcast: AMD Earnings, Microsoft AI Silicon, Samsung, Apple Plants

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing AMD’s quarterly earnings, Microsoft’s announcement of a custom AI-enabled chip for the next HoloLens, Samsung’s earnings, and rumors of Apple building three manufacturing plants in the US.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Power of Amazon Prime Beyond Shipping

Amazon CEO Jeff Bezos famously talks about the company’s Prime subscription service as an important part of its “flywheel” strategy, through which customers become increasingly tied into Amazon’s ecosystem and end up becoming more loyal and higher spending customers. The chief benefit of the Prime subscription has always been sold as free two-day shipping, but of course, the list of features the service offers has long since grown beyond that to include video and music streaming, access to books and magazines, photo storage and more. Now, it’s even being used as a foundation for selling additional third party subscriptions like TV bundles. It’s increasingly clear that, though the primary purpose of Prime may be selling more goods on Amazon.com, it’s becoming a very powerful platform for selling other things too.

Amazon’s Growing Share in Streaming Music

Though I think the Prime perk that’s most often talked about beyond shipping is video, it’s fascinating to see what Amazon has been able to achieve in music, in large part by offering a limited selection of music for streaming as part of the Prime subscription. Though other streaming music services offer 30-40 million songs, Amazon offers a subset of two million through its Prime Music service, and that’s been a popular option. Media reports recently indicated that Amazon now holds the number three position in streaming music globally, behind Spotify and Apple Music, through a combination of the limited Prime Music service and its separate Music Unlimited service. My own recent surveys suggest roughly one in six Prime subscribers in the US use the music feature at least monthly, and I would bet that Echo adoption plays a role in that, given that Prime Music is integrated into the Alexa function. That’s roughly half the rate of adoption of its video service, after a much shorter time in the market.

In Video, Expanding Beyond Competing with Netflix

Speaking of Prime Video, Amazon has invested heavily in the service in recent years, upping its original content spending and competing with Netflix in the catalog-based streaming space. It’s even expanded in Netflix-like fashion to many other countries around the world, though in practice its catalog remains very limited outside of a few key markets. But the more interesting part of its recent video strategy has been its creation of the Amazon Channels service, which allows Prime subscribers to bolt on monthly subscriptions to various channels, from premium networks like HBO and Showtime to niche and foreign content. Recent figures reported by BTIG Research suggest that Amazon alone may be responsible for a significant chunk of the subscribers for standalone streaming services like HBO Now through this channel. The combination of its own video service and these third party services into a bundle creates a pretty unique offering in the market, something really only matched indirectly by the subscription model offered by Apple’s App Store, albeit without a first party subscription as part of the bundle.

Other Features Also Get Usage From Smaller Segments

Though video and music are the most popular features beyond free shipping, others such as the free access to books and magazines through Prime Reading and the Photo Storage offerings are also used by 10% or more of Prime subscribers in the US. Applied to the likely 80 million plus subscribers Amazon now has globally, that means Amazon is becoming a meaningful player in a number of secondary markets almost incidentally, threatening standalone players who make their whole businesses out of providing similar offerings. Most importantly, Amazon doesn’t need to make any money directly from any of these services – indeed, it likely loses quite a bit of money on its video and music offerings in particular – simply because the benefits of increased stickiness and higher spending on Amazon.com outweigh those costs.

Implications in Messaging, Healthcare, and Beyond

This week, Amazon was reported to have created a secret group to work on healthcare projects including electronic medical records and telemedicine, while Amazon also recently created calling and messaging apps for its Echo devices and the accompanying Alexa apps. Though it would be tempting to write Amazon off as having no basis on which to build either of these businesses – after all, it has historically served households rather than providing personalized services to individuals – the businesses it has built in video, music, and beyond suggest that we should never underestimate Amazon’s ability to build new businesses off the back of its Prime subscription base. That doesn’t mean it will always be successful – its Fire Phone was a huge flop, after all – but it does mean that in the right business segments it has a decent shot at building a meaningful subscriber base for new services as a side effect of its investment in the Prime flywheel.

The Value of Limits

No one likes to think about limits, especially in the tech industry, where the idea of putting constraints on almost anything is perceived as anathema.

In fact, arguably, the entire tech industry is built on the concept of bursting through limitations and enabling things that weren’t possible before. New technology developments have clearly created incredible new capabilities and opportunities, and have generally helped improve the world around us.

But there does come a point—and I think we’ve arrived there—where it’s worth stepping back to both think about and talk about the potential value of, yes, technology limits…on several different levels.

On a technical level, we’ve reached a point where advances in computing applications like AI, or medical applications like gene splicing, are raising even more ethical questions than practical ones on issues such as how they work and for what applications they might be used. Not surprisingly, there aren’t any clear or easy answers to these questions, and it’s going to take a lot more time and thought to create frameworks or guidelines for both the appropriate and inappropriate uses of these potentially life-changing technologies.

Does this mean these kinds of technological advances should be stopped? Of course not. But having more discourse on the types of technologies that get created and released certainly needs to happen.

Even on a practical level, the need for limiting people’s expectations about what a technology can or cannot do is becoming increasingly important. With science-fiction-like advances becoming daily occurrences, it’s easy to fall into the trap that there are no limits to what a given technology can do. As a result, people are increasingly willing to believe and accept almost any kind of statements or predictions about the future of many increasingly well-known technologies, from autonomous driving to VR to AI and machine learning. I hate to say it, but it’s the fake news of tech.

Just as we’ve seen the fallout from fake news on all sides of the political perspective, so too are we starting to see that unbridled and unlimited expectations for certain new technologies are starting to have negative implications of their own. Essentially, we’re starting to build unrealistic expectations for a tech-driven nirvana that doesn’t clearly jibe with the realities of the modern world, particularly in the timeframes that are often discussed.

In fact, I’d argue that a lot of the current perspectives on where the technology industry is and where it’s headed are based on a variety of false pretenses, some positively biased and some negatively biased. On the positive side, there’s a sense that technologies like AI or autonomous driving are going to solve enormous societal issues in a matter of a few years. On the negative side, there are some who see the tech industry as being in a stagnant period, still hunting for the next big thing beyond the smartphone.

Neither perspective is accurate, but ironically, both stem from the same myth of limitlessness that seems to pervade much of the thinking in the tech industry. For those with the positive spin, I think it’s critical to be willing to admit to a technology’s limitations, in addition to touting its capabilities.

So, for example, it’s OK to talk about the benefits that something like autonomous driving can bring to certain people in certain environments, but it’s equally important to acknowledge that it isn’t going to be a great fit for everyone, everywhere. Realistically and practically speaking, we are still a very long way from having a physical, legal, economic and political environment for autonomous cars to dramatically impact the transportation needs of most consumers. On the other hand, the ability for these autonomous transportation technologies to start having a dramatic impact on public transportation systems or shipping fleets over the next several years seems much more realistic (even if it is a lot less sexy).

For those with a more negative bias, it’s important to recognize that not all technologies have to be universally applicable to make them useful or successful. The newly relaunched Google Glass, for example, is no longer trying to be the next generation computing device and industry disruptor it was initially thought to be. Instead, it’s being focused on (or limited to) work-based applications where it’s a great fit. As a result, it won’t see the kind of sales figures that something like an iPhone will, but that’s OK, because it’s actually doing what it is best designed to do.

Accepting and publicly acknowledging that certain technologies can’t do some things isn’t a form of weakness—it’s a form of strength. In fact, it creates a more realistic scenario for them to succeed. Similarly, recognizing that while some technologies are great, they may not be great for everything, doesn’t mean they’re a failure. Some technologies and products can be great for certain sub-segments of the market and still be both a technical and financial success.

If, however, we keep thinking that every new technology or tech industry concept can be endlessly extended without limits—everything in my life as a service, really?—we’re bound to be greatly disappointed on many different levels. Instead, if we view them within a more limited and, in some cases, more specialized scope, then we’re much more likely to accurately judge what they can (or cannot) do and set expectations accordingly. That’s not a limit, it’s a value.

The Two Increasingly Dominant Business Models in Consumer Media

One of the most fascinating things about the consumer technology industry is the range of business models in evidence among the various companies. Though software may indeed be said to be eating the world, what’s fascinating to me is that almost no business models are based on selling software. Instead, we’re seeing the rise of two dominant business models in almost all of consumer digital media: subscriptions and advertising. And as these take over on the content side of the industry, they’re more likely to take increasing share of other parts of the industry including hardware as well.

Subscriptions Take Over Video and Music Consumption

The two best examples of this shift involve video and music consumption, which have both seen dramatic changes in the balance of spending coming from purchases and rentals versus subscriptions in the last few years. Take the US home video market, for example. The chart below shows how dramatically the digital revenue has moved from purchases and rentals to subscription streaming led by companies like Netflix over the last few years:

Whereas in 2011, over 70% of the spending was driven by purchases and rentals of video, by 2016 that balance had turned on its head: a growing majority of spending now goes to streaming, with purchases still making up around 20% of the total and rentals dwindling almost to nothing. And of course, all this happened while the physical market was in rapid decline and the digital market in rapid growth, meaning that the underlying spending on subscription streaming grew far more rapidly.

A similar shift can be seen over the last few years in the music industry, where digital consumption has also eclipsed physical media, and where subscription streaming has come to drive the large majority of digital revenues:

Even including physical media, in the first half of last year, streaming accounted for nearly half of total consumption, and the vast majority of that came from subscriptions. Among digital revenue alone, it accounts for a significant majority.

Even in TV, These Two Models Vie for Supremacy

Even if we go beyond pure digital business models and look at more traditional TV business models, we see that these two models continue to vie for supremacy, with subscriptions arguably winning the upper hand lately as brands such as HBO eschew advertising, and as affiliate and retransmission fees rise. Whether consumers pay the subscription prices directly to the content owners or to intermediaries like pay TV providers, the dominant model – in the US at least – is a regular monthly payment for a bundle of content, with advertising providing a strong secondary source of revenue.

However, though TV ad revenue continues to grow for some networks, I suspect it’s nearing its peak, and the coming years will see a slow decline, driven by falling ratings, increasing avoidance of ads either through DVRs and other skipping technologies, lower ad loads on digital platforms, or simply engaging in other activity while ads play in the background. That’s going to drive the balance of revenues even more towards subscriptions, but the pay-per-view model continues to be a tiny minority of spending, leaving these two models dominant.

Online, Advertising Dominates

When it comes to other online content and services beyond video and music, advertising dominates across news and other sites and across services like search, email, and maps, with free services provided as part of ecosystems monetized in other ways being the main alternative. Getting consumers to pay subscription fees for online services continues to be very tough, and although minorities will do so either out of privacy concerns or principle, the vast majority use the free, ad-supported services.

Subscription Aggregation is Growing

As I’ve written previously in the narrower context of TV, the fragmentation of services offered in place of traditional big pay TV bundles is going to create an opportunity for aggregation and aggregators. But that opportunity goes beyond merely TV. We’re already seeing Apple and Amazon emerge as the leaders in subscription aggregation, with Amazon offering a huge range of subscriptions of many kinds through its Subscribe with Amazon service, and others more directly tied to a Prime subscription, and Apple offering both its own subscription services such as iCloud and Apple Music alongside third party subscriptions from MLB At Bat to Netflix through the App Store.

Amazon has tens of millions of Prime subscribers and is reportedly driving many of the signups for services like HBO Now through its platform, while Apple said on its last earnings call that it has over 165 million paid subscriptions combined for first party and third party services running through its payment systems. These companies are recognizing the rise of subscriptions as a business model and acting as storefronts for a wide range of services offering this business model.

The Subscription Model Will Expand

The next logical step is for subscriptions to expand into more areas which have historically had business models more oriented to one-off purchases. We’ve already seen this happen in the US smartphone market through the adoption of leasing and installment plans offered by the wireless carriers. Among these carriers, these models have now become dominant, as the chart below shows:

And of course we’re starting to see some device vendors jump on board with this model too, with Apple offering the iPhone Upgrade Program, which is essentially an iPhone subscription, and subscription models popping up for other hardware as well. It’s only a matter of time before buying a bundle of hardware, software, and first and third party services becomes a big chunk of the market, further cementing the role of those companies that can offer such an aggregated service.

Advertising Will Continue to Offer an Alternative

Even though I believe that subscriptions will continue to grow in importance as a business model, that’s not to say ad-based business models will fade away. In fact, they’ll continue to offer an alternative for those who for financial or other reasons prefer to pay less for the hardware and services they use in exchange for being targeted with advertising. We might well see an income-based divide emerge on this basis, with lower-income people gravitating towards ad-based business models while wealthier sectors of society go further down the subscription path. This income divide is already evident among Amazon Prime subscribers and is likely to grow over time.

This means that the ecosystems which focus on subscriptions rather than advertising are ironically going to attract the customers advertisers would most like to reach. That creates an interesting paradox: it may open opportunities for subscription-based companies to offer limited advertising, while making it more challenging for ad-based platforms to attract premium ads. All of this will continue to evolve and exist in a complex balance with a lot of mixed business models for years to come. But my bet is that subscriptions and advertising will come to be the two dominant business models in consumer technology, with subscription models dominating the high end of the market, and with some really interesting implications.

How will Our Screen Addiction Change?

A Nielsen Company audience report published in 2016 revealed that American adults devoted about 10 hours and 39 minutes each day to consuming media during the first quarter of 2016. This was an increase of exactly one hour over the same period of 2015. Of those 10-plus hours, about 4½ hours a day are spent watching shows and movies.

During the same year, the Deloitte Global Mobile Consumer Survey showed that 40% of consumers check their phones within five minutes of waking up, and another 30% check them within five minutes of going to sleep. On average, we check our phones about 47 times a day, a number that grows to 82 times for those in the 18-24 age bracket. In aggregate, US consumers check their phones more than 9 billion times per day.
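As a quick sanity check on that aggregate figure, here’s the arithmetic; the ~200 million figure for US adult smartphone users is an assumption for illustration, not a number from the Deloitte survey.

```python
# Aggregate daily phone checks = per-person checks x number of users.
US_ADULT_SMARTPHONE_USERS = 200_000_000  # assumed, for illustration only
CHECKS_PER_DAY = 47                      # Deloitte average cited above

total = US_ADULT_SMARTPHONE_USERS * CHECKS_PER_DAY
print(f"{total / 1e9:.1f} billion checks per day")  # ~9.4B, in line with the claim
```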

Any way you look at it, we are totally addicted to screens of any form, size, and shape.

While communication remains the primary reason we stare at our screens, there are also tasks, such as reading books or documents, playing card or board games, drawing and writing, that we used to perform in an analog way and that are now digital. And, of course, there is content consumption. It all adds up to us spending more time interacting with some form of computing device than with our fellow human beings in real life.

I see three big technology trends in development today that could shape the future of our screen addiction in very different ways: ambient computing, Virtual Reality and Augmented Reality.

Ambient Computing: the Detox

Ambient computing is the experience born from a series of devices working together to collect inputs and deliver outputs. It is a more invisible form of computer interaction, facilitated by the many sensors that surround us and empowered by a voice-first interface. The first steps of ambient computing can be seen in connected homes, where wearables function as authentication devices to enable experiences such as turning the lights on or off or granting access to buildings or information. The combination of sensors, artificial intelligence, and big data will allow connected and smart machines to be much more proactive in delivering what we need or want. This, in turn, will reduce how often we need to access a screen to input or visualize information. Screens will become more a complement to our computing experience than the core of it.

To get a feel for how this might impact average screen time, think about what a device such as a smartwatch does today. While a screen is still involved, it is much smaller, and it shows the most important and valuable information without drawing you into the device. I often talk about my Apple Watch as the device that helps me manage my addiction. It allows me to stay on top of things without turning each interaction into a 20-minute screen soak. Another example is the interaction you might have with Google Home or Alexa when you inquire about something. Today, for instance, I asked for the definition of “cabana” because my daughter wanted to know. I got what I needed in less than 30 seconds: “a cabin, hut, or shelter, especially one at a beach or swimming pool.” Had she gone to Google Search to find the definition, I guarantee it would have taken a good 10 minutes between reading through the results and looking at pictures, with the effectiveness of the search being no better because of the screen.

While not a total cure, ambient computing could provide a good detox program that will allow us to let go of some screen time without letting go of our control.

Virtual Reality: the Ultimate Screen Addiction

Virtual Reality sits at the opposite end of the spectrum from ambient computing, as it offers the ability to lose yourself in the ultimate screen experience. While we tend not to talk about VR as a screen, the reality is that, whatever experience you are having, it is still delivered through a screen. It is a screen that, rather than sitting on your desk, on your wall, or in your hand, sits on your face through a set of glasses of various shapes.

I don’t expect VR to be something we turn to for long periods of time, but if we have ever complained about our kids or spouses having selective hearing when they are gaming or watching sports, we have another thing coming!

There is talk of VR experiences that are shared with friends, but if multiplayer games are anything to go by, I expect those “share with friends” moments to be a minor part of the time spent in VR. Because VR is so much more immersive, being in an experience with someone you do not know, as you might in traditional gaming, could feel more involved and overwhelming. And coordinating with actual friends might be an effort worth making for a music or sports event, but maybe not so much for just playing a game.

Escapism, the desire to be cut off from reality for a while, will be the biggest driver of consumer VR.


Augmented Reality: the Key to Rediscovery

Augmented Reality is going to be big, no question about it. Now that Apple will make it available overnight to millions of iPhones and iPads as iOS 11 rolls out, consumers will be exposed to it and will engage with it.

What I find interesting is the opportunity AR has to reconnect us to the world around us. Think about Pokemon Go: the big excitement was that people went outside, walked around, exercised. Because humans do not seem to be able to do anything in moderation, that feel-good factor vanished quickly as people paid more attention to the little creatures than to where they were going, culminating in incidents of trespassing, injuries, and fights!

That said, I strongly believe there are many areas where AR can help in our rediscovery of the world around us, from education to travel to nature. Think about Google Translate and how it lowers the barrier of entry to travel in countries where you do not speak the language.

The trick will be not to position the AR experience as the only experience you want to have. AR should fuel an interest to do more, discover more, experience more.

Of course, developers and ecosystem owners are driven by revenue rather than the greater good of humanity. Yet I feel that the level of concern around the impact technology is having on our social and emotional skills has lately grown enough to spur real interest in driving change.

Ultimately, I believe that our addiction to screens is driven by a long list of wrong reasons. Our obsession feeds off our boredom, our feeling that idle time is unproductive time, and the false sense of safety of connecting to others through a device rather than in person. New technologies will offer the opportunity to change if we really want to.

Tech in the Heartland

Having just spent a few weeks vacationing in a part of the US where an abundance of corn and limestone-filtered water, along with a predilection for distilled beverages led to the creation of our country’s most famous native spirit—bourbon—I’ve regained a sense of life’s priorities: family, food and fun. (And for the record, the Kentucky Bourbon Trail is a great way to spend a few days exploring that part of the world—especially if you’re a fan of the tantalizing golden brown elixir.)

Of course, while I was there, I also couldn’t help noticing what sort of technology was being used (or not) and how people think about and use tech products in that part of the country.

Within the many distilleries I visited, the tech influence was relatively modest. Sure, there were several temperature monitors on the mash and fermentation vats, a few industrial automation systems, and I did see one control room with multiple displays and a single Dell server rack that monitored the process flow of one of the largest distilleries, but all-in-all, the process of making bourbon is still decidedly old school. And for the record, that just seems right.

As with many traditional industries, the distilled spirits business has begun to integrate some of the basic elements of IoT technologies. I have no doubt that it’s modestly improving their efficiency and providing them with more raw data upon which they can do some basic analysis. But it also seems clear that there are limits to how much improvement those new technologies can make. With few exceptions, the tools in place appeared to be more focused on codifying and regulating certain processes than really driving major increases in production.

Ensuring consistent product quality and maximizing output are obviously key goals across many different industries, but the investments necessary to reach those outcomes, and the return on those investments, aren’t necessarily obvious for any but the largest companies in these various industries. And that’s a challenge that companies offering IoT solutions are going to face for some time.

What became apparent as I observed and thought about what I saw was that the technology implementations were all very practical ones. If there was a clear and obvious benefit, along with a comfort factor that made using a technology a natural part of the distillation, production or bottling process, then the companies running the distilleries seemed willing to deploy it. And if not, well, that’s why there are still a lot of traditional manufacturing processes in place.

That sense of practicality extended to the people I observed as well. People I saw there were using products like smartphones and other devices as much as people on the coasts—heck, my 93-year-old mother-in-law has an Amazon Echo to play her favorite big band music, uses an iPad every day to play games, and maintains a Gmail account to stay in touch with her children, grandchildren, and great-grandchildren—but the emphasis is very much on the practical, tool-like nature of the devices.

I also noticed a wider range of lower-cost Android phones and fewer iPhones being used. Of course, much of that is due to income discrepancies. The median household income in the commonwealth of Kentucky is $43,740, which is 19% lower than the US median of $53,889 according to the latest US Census Bureau data, and a little more than half the San Francisco county median income of $81,294. Given those realities, people in many regions of the US simply don’t have the luxury of getting all the latest tech gadgets whenever they come out. Again, they view tech products as more practical tools and expect them to last.
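For the record, here’s the quick arithmetic behind those comparisons, using the Census figures cited above.

```python
# Checking the income comparisons against the cited Census figures.
ky, us, sf = 43_740, 53_889, 81_294

print(f"KY vs US median: {(us - ky) / us:.0%} lower")  # ~19% lower
print(f"KY as share of SF median: {ky / sf:.0%}")      # ~54%, a bit over half
```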

There’s also a lot more skepticism and less interest in many of the more advanced technologies the tech industry is focused on. Given the limited public transportation options, cars, trucks and other forms of personal transportation are extremely important in this (and many other) part(s) of the country—I’m convinced I saw more car dealership ads on local TV and in local newspapers than I can recall seeing anywhere—but there’s absolutely zero discussion of any kind of semi-autonomous or autonomous driving features. People simply want good-looking, moderately priced vehicles that can get them from point A to point B.

In the case of AI and potential loss of jobs, perhaps there should be more concern, but from a practical perspective, the bigger worries are about factory automation, robotics and other types of technologies that can replace traditional manufacturing jobs, which are more common in many parts of middle America.

Also, the idea that somehow nearly everything will become a service seems extraordinarily far-fetched in places like the ones I visited. That isn’t to say that we won’t see service-like business models take hold in major metropolitan areas. However, it’s much too easy to forget that most of the country, let alone the world, is not ready to accept the idea that they won’t really own anything and will simply make ongoing monthly payments to untold numbers of companies providing them with everything they need via an equally large number of individual services.

As Facebook’s Mark Zuckerberg has started to explore, an occasional view from outside the rose-colored bubble of Silicon Valley can really help sharpen your perspective on the real role that technology is playing (or might soon play) in the modern world.

What to Look For in the Q2 Earnings Season

Another earnings season is upon us: the public tech companies will begin reporting second calendar quarter results later this week, starting with Netflix on Monday and moving on to others like Qualcomm, T-Mobile, and Microsoft later in the week, with others to follow over the next couple of weeks. Here’s what I’m going to be looking for – and what I suggest you look for – as some of the big companies report.

Alphabet

With Alphabet, I’m most curious about their commentary on YouTube and on programmatic advertising. These have both been big drivers of their ad revenue growth in recent quarters, and yet both were potentially hit by the YouTube advertiser boycott and, more broadly, the rethink by brands about the lack of control over where their ads appear. The boycott started late in Q1 and Alphabet management said it had little effect then, but there’s been some recent evidence of a pullback in programmatic spending. I’ll also be looking to see whether the recent stronger growth in ad revenue from non-Google sites continues or falls back to earlier levels – even if it continues at current levels, it’s still growing far more slowly than ad revenue from Google’s own sites. Lastly, I’ll be listening for commentary on the earnings call about the cloud business – there have been signs recently that Google is willing to push harder to grow this business, which remains much smaller than the equivalent businesses at Amazon and Microsoft, and is barely discernible in Alphabet’s overall results.

Amazon

At Amazon, there have been recent signs of both slowing growth and shrinking margins in the AWS cloud business, so I’ll be looking to see whether those trends continue or whether things return to the previous steeper trajectory. The larger e-commerce business has also shown recent signs of a slowdown, albeit to what are still very healthy levels of growth, and Amazon’s management hasn’t satisfactorily addressed either the causes or the prognosis on its earnings calls. After a period of easing up on investment and driving more meaningful profits, it has also seemed lately as if Amazon is gearing up for a bigger investment push and therefore thinner margins. Though many investors don’t seem to care much about short-term profits, the stock has responded positively to the recent growth and may pull back a little if signs of shrinking margins persist. The international business continues to be a drag on the overall results and is the only reporting segment that isn’t profitable; the last three quarters have seen deepening losses there alongside inconsistent profits in the domestic business, so that’s worth watching too.

Apple

Apple has now posted two consecutive quarters of revenue growth after nearly a year of declines (its first in many years), so all eyes will be on whether that growth continues, and especially on what happens with the iPhone, since that remains the biggest determinant of overall growth. Its guidance was for at least a billion dollars of year-on-year revenue growth, implying underlying iPhone revenue growth, with the higher end of guidance suggesting more material growth of over $3 billion. Beyond the iPhone, several of Apple’s other reporting segments have turned around recently: the Mac line has returned to growth over the last couple of quarters, the iPad revenue line (if not shipments) is looking a little healthier, and Other Products (including both the Apple Watch and AirPods) has been a decent contributor. Services, of course, continues to be a major driver of growth and a significant contributor to overall revenues, and I would expect all those trends to continue. One of the things I’m most interested in is Apple’s guidance for the September quarter, because there’s arguably more uncertainty about it than about any other recent quarter, given the new iPhones Apple will announce a few weeks before the quarter’s end. The first ten days or so of new iPhone sales normally fall in the September quarter, so severely constrained supply on one or more new models, or a delayed launch – both of which have been reported – would materially affect performance during that brief period. At the same time, Apple’s September quarter has been its most consistent recently, landing within a $4 billion band for the last three years, so that’s the baseline against which to measure any guidance.

Facebook

One thing we already know about Facebook’s results is that it has passed 2 billion monthly active users, and that figure should be very close to its final total for the June quarter, putting it just above its prior user growth trajectory. Far more interesting, however, will be what happens on the revenue side: Facebook has been warning since late last year that ad revenue growth would slow materially as ad load in the News Feed saturates, and yet there’s been little sign of it so far. We’re now entering the portion of the year when that slowdown was supposed to begin, so I’ll be watching both the numbers and the commentary on the earnings call for signs of where things stand. Facebook has certainly been pushing ads into many new places lately, both in the core Facebook app and in others such as Messenger and Instagram, suggesting it’s doing its best to keep the growth trajectory going. The other thing to look for is Facebook’s big recent ramp-up in spending on original video content, which began in earnest during Q2 and was foreshadowed on the Q1 earnings call. With many recent reports about how much Facebook is spending and offering to spend, I’d expect this to be a big theme on the call.

Microsoft

Microsoft has returned to revenue growth over the last three quarters through a combination of underlying organic growth, the lessening effects of the new Windows 10 accounting treatment introduced over a year ago, and the fact that the former Nokia phone business has shrunk so much that it’s no longer leaving such a big hole in the finances. However, while the Office-centered Productivity and Business Processes segment has been growing strongly, with the cloud business following close behind, the combination of hardware and Windows that Microsoft lumps together in its More Personal Computing segment has been in decline. The effective death of the phone business and the recent boost to the Surface line should help start to turn that revenue line around a little, though there are still big headwinds in the form of changes to the Windows licensing model. Microsoft’s recent layoffs also point to ongoing transitions from legacy to cloud-based products and business models, which will continue to work their way through Microsoft’s finances for several more years. It’s also worth looking at capital expenditures, which have come down meaningfully both in dollar terms and as a percentage of revenues over the last year or so, to see whether that trend begins to reverse as Microsoft continues to invest in Azure and related cloud infrastructure.

Samsung

Samsung has already reported preliminary results for the quarter, with both revenues and profits set to hit all-time records, so there won’t be nearly as many surprises as with the other companies discussed here. What we don’t know yet is the composition of those results by segment, because Samsung saves that reporting for its final results. Based on past trends, it seems likely the semiconductor division drove the vast majority of the growth in both revenues and profits, while the mobile division may have seen some modest growth off the back of a strong Galaxy S8 launch. The mobile division’s profits have been fairly flat lately even as the semiconductor division has taken off, and I would expect those trends to continue too.

Twitter

Twitter is in the middle of yet another reset, retiring a number of ad products and investing in others in a way the company has said will dampen revenue growth in the near term while setting it up for longer-term growth and profitability, at least in theory. As such, I’d expect this to be something of a down quarter for Twitter; the question is really just how bad the financial picture is, both for this quarter and in the outlook going forward. User growth is a more interesting data point this quarter than it has been for a while, because Twitter launched its Lite product for emerging markets in Q2, and Keith Coleman, who runs product at Twitter, recently said it had driven significant user growth in India, so we could see faster growth than we have in a long time, albeit in a very low-ARPU market. This will be the last earnings call with Anthony Noto as CFO before he hands over to the new full-time CFO Twitter recently announced and focuses on his other job as COO. I’d expect that move to prompt questions on the call about the stability of management and about Jack Dorsey’s two CEO jobs, but also some real probing about the longer-term trajectory and prospects investors can expect at Twitter.