Samsung and Microsoft Partnership Highlights Blended Device World

My, how things have changed. Not that many years ago Microsoft was trying to compete in the smartphone market and PCs were considered on their way out. Samsung was a strong consumer brand but had virtually no presence (or credibility) in the enterprise market.

Fast forward to today—well, last week’s Galaxy Unpacked event to be precise—and the picture is totally different. Microsoft CEO Satya Nadella joined Samsung Mobile CEO DJ Koh onstage to announce a strategic partnership between the companies that highlights what could (and should) prove to be a very important development, not only for the two organizations, but for the tech industry in general.

The partnership also signifies a profound shift in the landscape of devices, companies, platforms, and capabilities. By bringing together Samsung branded hardware and Microsoft software and services, the two companies have formed a powerful juggernaut that represents a serious threat to Apple and an oblique challenge to Google, particularly in the enterprise market.

To be clear, the companies have worked together in the past, and some have argued that the exact details of the partnership remain to be fleshed out. Fair enough. But when you put together Samsung's number one position in smartphone market share with the deeply entrenched position of Microsoft software and cloud-based services, it's not hard to imagine a lot of very interesting possibilities growing out of the new arrangement.

For one, it helps each company overcome long-running concerns that they’ve been missing out on important markets. Samsung has been chided for not having the software and services expertise and offerings of an Apple, which was theoretically going to make the Korean giant vulnerable as the hardware markets started to slow. On the other side, Microsoft rather notoriously failed to make any kind of dent in the smartphone market. Together, however, the complementary capabilities offered by the partnership give customers a wide range of powerful and attractive devices, along with leading edge services and software in the business world. Plus, the two companies don’t really compete, making the collaboration that much more compelling. The consumer story is clearly much tougher, but even there, Microsoft’s forthcoming game streaming services could certainly be an intriguing and compelling option for certain consumers. On top of that, the combination of Samsung and Microsoft is likely to attract interest from other third-party consumer services (Spotify or Netflix anyone?) that would be interested in joining the party.

But there are additional benefits to the partnership as well. For one, it clearly helps tie PCs and smartphones together in a much more capable and blended way. To Apple’s credit, their Continuity features that link iPhones, iPads and Macs in an organized fashion were the first to make multiple devices operate as a single entity. However, despite Apple’s overall strength, the percentage of people who only own Apple devices is actually pretty small—in the single digit percentage range according to my research. The percentage of people who have Windows PCs and Android-based phones, on the other hand, is enormous. Obviously, Samsung only owns a portion of that total, but it’s a big enough percentage to make for a very significant competitor.

More importantly, the combination of Microsoft and Samsung also further breaks down the walls between different operating systems and highlights the value the cloud can bring to a multi-device, multi-platform world. Samsung is still committed to Google’s Android as its smartphone OS, but by integrating things like Office 365, OneDrive and more into its devices, they are making life easier for people who spend much of their time in the Windows world. Conversely, Microsoft’s expanding efforts with their Your Phone app in Windows 10 highlight the effort they’re making to turn the process of using multiple devices into a more coherent experience. Unique Samsung-specific extensions promised for that app should make it even more compelling. For Google, the challenge will be continuing to build the presence of apps like G Suite and other enterprise-focused services in spite of its leading Android partner choosing Microsoft for some of its business-based offerings.

The deal extends beyond smartphones as well. Though Samsung has been a tiny player in the Windows PC market for several years, they made a bit of a splash at the event by introducing the Samsung Galaxy Book S, the first Windows-based always connected PC (ACPC) to use Qualcomm's third-generation, PC-focused processor, the Snapdragon 8cx. While there are clearly still challenges in that space, the fact that it's a Samsung, Microsoft, and Qualcomm partnership in PCs exemplifies again how far the tech industry has evolved over the last several years.

We’re clearly still in the very early days of analyzing the potential impact of the Samsung and Microsoft partnership. But even a casual glance suggests that there are very interesting things still to come.

Podcast: Samsung Unpacked, Note 10, Apple Card

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the announcements from Samsung’s Galaxy Unpacked Event, including the launch of the Note 10 line of phones, the debut of a Qualcomm-powered ACPC notebook, and their strategic partnership with Microsoft, and discussing the release and potential impact of the Apple Card credit card.

Wrong About Roku, Uber’s Losses, Apple Card Approvals

Wrong About Roku
One of the more interesting companies and stocks to watch has been Roku. In all honesty, I’ve been wrong about this company, as my core thesis was more on the negative side. Roku is interesting because, for now, their primary value proposition is the user interface. We can argue they are a platform, and that may be accurate as well.

If Roku had stuck to streaming TV hardware like a separate set-top box or HDMI stick, I don’t think they would have had anywhere near the success they have. What has changed Roku’s fortune is their integration with TV brands, where they are essentially becoming the default smart TV UI for a number of major TV manufacturers. The bullish view for Roku is that they are the Microsoft Windows or Google Android of smart TVs.

While TV manufacturers may still offer some platform choice like Apple TV, or even Android TV, my gut tells me those hardware companies may still prefer to work with Roku because Roku is not playing favorites and is willing to customize and partner heavily with OEMs. Apple and Google have a specific agenda with their TV OSes, while Roku’s agenda can be more partner- and customer-centric since it’s essentially neutral.

Roku’s CEO Anthony Wood has been vocal that they are in the best position to benefit from the cord-cutting trend, and I’m beginning to come around to his position. I cut the cord earlier in the year and have had an overwhelmingly positive experience moving away from traditional cable. I tried this experiment in 2011, and it was miserable. This time is completely different.

I have an Apple TV that I use my streaming services on, and a TV with Roku integrated. They both have their advantages, but the difference between Roku’s approach and Apple’s is stark. While I still like many of the UI features of Apple TV, there is no doubt that Roku’s integrated software as the primary TV OS is much cleaner, simpler, and quicker at getting me to the apps I want.

This gives me hope that as Apple does deals with TV makers, they will see more success with tvOS. Even then, I think Roku is better positioned than I originally thought, as they focus on being the smart TV platform that plays nice with everyone.

Uber’s Losses
Uber’s earnings made headlines on the back of their losses. No one who studies the fundamentals of Uber, or ride-sharing in general, is surprised that Uber is still losing a significant amount of money. Their strategy of subsidizing the cost of a ride in order to be cheaper than a traditional taxi is no secret.

This strategy is critical to scale as they seek to acquire new users and provide a better service, including lower prices, to customers. While in Vegas for work recently, I took a taxi from my hotel to the airport just to see how much it would cost compared to Uber. Before getting in the taxi, I looked at Uber to see what it would cost; the estimate was just over $11. With the taxi, I had no idea how much it would cost, since the meter goes up as you travel and fees are added along the way. In the end, I paid $24 for the taxi ride to the Las Vegas airport. That was quite the price difference, and enough to convince me never to use taxis again.

The long game with Uber, or Lyft, or any rideshare, is a belief in robotaxis. As rideshare companies eliminate human drivers, they will not lose as much money subsidizing the cost of rides and may even make money while still being extremely cost-competitive against taxis. This is ultimately the death blow to taxis, as those companies will likely not succeed in moving to robotaxis. Again, this is the long view of Uber and Lyft.

The data is positive for Uber, though. Looking at the research and customer behavior data I have access to for Uber, a few things stand out. Across Uber Rides, Uber Bikes, and Uber Eats, the average customer is using Uber 6 times a month, with a steadily increasing average cost per transaction now nearing $16. Customer retention is one of the more interesting data points, as Uber customers spend more money with Uber the longer they are a customer. Dollar retention from returning customers has been on a steady uptick. Uber has also recently crossed the 30% mark for the portion of the US population that has used the service at least once.
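As a rough, back-of-the-envelope check on those figures (a minimal sketch; the US population number is my own assumption, not part of the data above), the cited usage rates imply meaningful per-customer spend:

```python
# Back-of-the-envelope math using the Uber figures cited above.
uses_per_month = 6             # average transactions per customer per month (Rides, Bikes, Eats)
avg_transaction = 16.00        # average cost per transaction, nearing $16
share_tried_uber = 0.30        # ~30% of the US population has used the service at least once
us_population = 330_000_000    # assumption: rough 2019 US population, not from the source data

monthly_spend = uses_per_month * avg_transaction   # ~$96 per active customer per month
annual_spend = monthly_spend * 12                  # ~$1,150 per active customer per year
ever_used = share_tried_uber * us_population       # ~99 million people

print(f"Monthly spend per active customer: ~${monthly_spend:.0f}")
print(f"Annualized spend per active customer: ~${annual_spend:,.0f}")
print(f"US residents who have tried Uber: ~{ever_used / 1e6:.0f} million")
```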

The data suggests Uber is well-positioned if they can just acquire the customer. User growth, and the cost to acquire new users, will be the data points to keep an eye on going forward.

Apple Card Approvals
This morning CNBC published an interesting article digging into some more details of Apple Card. The report details Apple’s goal to make Apple Card available to as many of its customers (above the age of 18) as possible. That includes ways to approve customers with lower credit scores.

The report details a customer with a FICO score of 620 who was approved, but only for a $750 limit at a 23.99% interest rate, which he said was lower than his other credit cards. This is a really interesting approach, and it will be fascinating to see how it plays out. The potential here is that these customers are able to improve their credit scores and hopefully make better financial decisions. The downside, however, is what happens if poor-credit customers are regularly late or default, and then face debt recovery companies or other painful experiences when their accounts go delinquent. The risk is that an experience like that sours a customer on Apple, because even though Apple is working with Goldman Sachs, Apple is still the front brand and may be blamed for negative experiences around Apple Card.

The report goes on to detail an earlier credit card concept Steve Jobs was involved in but ultimately didn’t pursue because he didn’t want Apple customers to face rejection over an Apple product. This attempt with Goldman Sachs seems designed to get as many people approved as possible but, as I point out, that is not without risks.

This is one of the more interesting products to watch from Apple, not only because of its rewards and potential upside, but also because it carries more risk than Apple’s other products.

My 5G Explainer: There Will Be Five ‘Flavors’

If you’re in the telecom/wireless space, you’ve no doubt been asked by tech industry colleagues, or even by curious friends and family, about 5G: What is it, how will it be different than 4G, and when and where will it be available? It’s awfully difficult to give a clear and succinct answer. As evidence, I present AT&T’s August 6 press release announcing the availability of 5G in New York City: it serves a very limited area (which AT&T did not specify), and is only available to select enterprise customers. And, in the same announcement, AT&T said that “5G will be launched broadly over sub-6GHz in the coming months, with plans to offer nationwide 5G in the first half of 2020”. Huh?

So, dear Techpinions reader, I offer you the tech equivalent of a ‘summer beach read’ — the easiest way to understand, and then explain, what 5G will look like over the next couple of years. You can thank me now for laying it out in a way that eliminates the need to use the following terms: 5G SA/NSA, mmWave, sub-6, mid-band, 5G TF, 3GPP-based. This is also meant to help you ignore, and/or override, operators’ particular branding of their own version(s) of 5G, and (often obfuscating) marketing terms that they might use.

Put simply, there are going to be five distinct ‘flavors’ of 5G. Over time, the three main flavors will be overlapping (think Neapolitan ice cream), but for now they’re fairly distinct.

Three Main Flavors

5G+. These are 5G services that are based on the mmWave bands (above 6 GHz – today, mainly in the 28 GHz & 39 GHz bands, with additional bands coming later). This will likely be the fastest 5G service, but coverage will be very limited because a cell can only reach a few hundred meters and the signal doesn’t do very well indoors. The best way to understand/explain: it’s like a ‘super Wi-Fi hotspot’. Expect to see this mainly in city cores, densely populated areas, and high-traffic venues (e.g., stadiums). Today, AT&T brands this service as 5G+ and Verizon as 5G Ultra Wideband.

5G. Over time, this will be the most common ‘flavor’ of 5G. The ‘vanilla’ of 5G, if you will. This is going to be the 5G we often see referred to as ‘sub-6 GHz’, combining operators’ spectrum holdings ranging from 600 MHz up to 4.2 GHz (depending on the operator, and some future auctions). At these lower bands, speeds will not be as compelling as 5G+, but coverage will be broader. For example, Sprint’s 5G service (using its 2.5 GHz spectrum), which it has branded as ‘True Mobile 5G’, offers better coverage at launch than Verizon’s and AT&T’s mmWave services, but top speeds are lower.

There are a couple of nuances to understand here. First, this is the flavor of 5G that will look the most like 4G LTE, in that it will get steadily better over time. That’s because operators will be combining their spectrum bands to offer an increasingly effective combination of speed, coverage, and capacity. For example, New T-Mobile will be combining its 600 MHz spectrum (currently being built out for 5G) with Sprint’s 2.5 GHz spectrum…and over time will be migrating some of the spectrum being used for 4G LTE (such as 700/800/1900 MHz channels) over to 5G.

Second, phones will also be able to work across multiple 5G bands. For example, we expect that at least some of the phones AT&T introduces in 2020 will work in both the sub-6 GHz and mmWave bands, though the experience at the outset might be a little rough.

5G/4G+. These are services that aren’t officially 5G, but are being branded as 5G. You can thank AT&T, which basically took what the rest of the industry was calling ‘Gigabit LTE’, and called it ‘5G Evolution’. Lest you be tempted to fire off a nasty-gram to AT&T’s marketing team, the fact is that some of the best speeds consumers are experiencing with LTE get us into 5G ‘territory’. For example, some users of gigabit LTE are experiencing speeds in the 200-400 Mbps range, which is similar to the speeds of the 5G services Sprint has launched in four cities, to date.

This is not unlike what we experienced in the transition to LTE. In the early days, some of the best 3G services (HSPA+) were as good as, or better than, some of the initial LTE services. Over time, the lines between the 3G and 4G data experience became more distinct, and we expect the same for 4G/5G.

Two Additional Flavors

Now, the above represent the main three ‘flavors’ of 5G: gold-silver-bronze, chocolate-vanilla-strawberry (and, Neapolitan over time), etc. But there are two additional, fairly distinct flavors of 5G to add to our explainer.

5G FWA. These are 5G services used specifically for fixed wireless access, aimed at the residential broadband market. The one commercial service available today is Verizon’s 5G Home, available in parts of four cities. It uses the same spectrum as other flavors of 5G (in Verizon’s case, the same mmWave spectrum as that for 5G Ultra Wideband), but requires specific CPE for the home (rather than a phone) and does not feature mobility. In some cases (such as Verizon’s 5G Home), 5G FWA is aimed at competing directly with incumbent fixed broadband suppliers such as Xfinity Broadband (Comcast), while in other cases, 5G FWA is a more technically capable and cost-effective way of getting broadband to homes that are unserved or underserved by broadband today.

Industrial 5G/IIoT: This is a version of 5G focused on the enterprise market and for use by machines and other connected devices. There are strong use cases for the factory floor and manufacturing. One key item to understand here is that although there are critical elements of the next iteration of the 5G standard that are needed for Industrial 5G, this ‘flavor’ of 5G will represent a combination of numerous 4G, 5G, and even Wi-Fi services, including: CBRS (3.5 GHz), private (enterprise) LTE, LAA, and even Wi-Fi 6. You’ll be hearing more about this over the next couple of years.

In the early days of 5G, there will be very visible tradeoffs between coverage and speed. That distinction will erode over time, as operators build out 5G over numerous spectrum bands, migrate 4G channels to 5G, and introduce phones that are able to nimbly move between low, medium, and high band spectrum. It will also become a clever software and network tuning game, as operators strive to deliver the best combination of speed and coverage, given economics, capacity, and users’ context.

My Apple Card Conundrum

One of the more important products Apple has brought to market thus far in 2019 is the new Apple Card. This is a virtual as well as a real credit card that Apple created in partnership with Goldman Sachs Group. I have had the privilege of test driving the card for a bit and love its various reporting features and the way it allows me to become a more informed consumer by using its various analytical capabilities. Here is a short video about this card and its features if you are not up to speed on this new credit card service from Apple.

One of the major incentives to use the Apple Card is its daily cashback feature. Depending on what you buy, you can get 1-3% cash back, credited daily. There are other credit cards that offer cash back, but none give that back to the user on a daily basis. Apple Card can be tied to Apple Pay and makes Apple Pay more important to users, too, as it gives them analytical features so they can stay much more up to speed on their spending and money management.

I personally try not to use too many credit cards, for various reasons. As you may know, the more cards you have, the more it impacts your credit score. I have two cards for the business, one tied to my bank and used as a debit card, and then one that is for personal use and is tied to the airline I fly the most, so I can earn miles when making purchases. I fly a lot and earn a lot of miles by flying, so I use my airline’s credit card the most for personal items, which adds significant miles to my mileage balance. Over the years, I have used miles often for vacation flights and even for products available through miles.

One of my other cards has significant travel insurance tied to it, which is also a very valuable perk of that card. I also get points from this card that often pay off in gift cards, which leads me to my Apple Card conundrum. Today I use Apple Pay more for convenience than for big purchases or big-ticket items. Those I put on the other two cards, for the mileage points or, for travel, for the insurance and points I get from them.

Apple’s 1-3% cashback on the Apple Card is very tempting for larger purchases, but it comes with no miles, travel insurance, or gift points. While I do love its virtual and physical credit functions and its analytical tools, I will, over time, need to weigh the true benefit of its cashback feature against the benefits of the other cards I use when it comes to big purchases, and especially the miles I earn on my airline card.

I suspect I am not the only one with this conundrum. The airlines especially have been marketing their own credit cards for over a decade to entice flyers to earn miles and keep them loyal to their airline. And various cards from other companies have upped the ante in terms of other buying incentives and perks.

But Apple’s move to give cashback on a daily basis and to add analytical tools to its card is going to put pressure on the airlines and other credit card providers to create programs similar to what Apple is providing.

I believe that Apple’s analytical tools are an industry game-changer. In the short time I have used that feature, it has become apparent that making consumers more informed about their spending and the real costs of a credit card is a really big deal. In fact, I predict this will become the #1 thing people who use competing credit cards will be asking for from their cards.

Apple’s daily cashback feature will also put pressure on those with cards that give cashback to follow suit. How fast they can respond with a competitive program is a big question. I understand that Apple had been working for a couple of years with Goldman Sachs Group to create an instant cashback program and that it was a relatively complex problem to solve. But if Apple can do it, you can expect the major companies with cashback credit cards to try to build similarly innovative programs to stay competitive over time.

If I could have Apple add any other feature to this card, it would be to find a way to do a deal with the airlines to offer the option of airline miles like one can get from Chase Sapphire Preferred and Capital One Venture. If they added that perk, I would dump my airline card in a heartbeat and make the Apple Card my preferred card for travel and personal use. Not sure how possible that would be, but this would be on the top of my wishlist for future Apple Card features.

Regardless of my conundrum, I have no doubt that the Apple Card will become one of the few credit cards I use often. It will be especially valuable when I buy Apple products, which get the most cashback in the program. Given Apple’s loyal installed base, I believe it will be a big hit with that crowd. It will also serve as a way to bring more people to the Apple stores, which is an important goal of this card. But Apple may need to do some aggressive marketing of the card, especially to those who use airline mileage cards.

According to Bloomberg, “the analysts see a 2023 earnings-per-share gain of just 1% for Apple, and 2% for Goldman, because the card has no fees, low-interest rates, and its profitability will likely trail the industry average.” What the market “misses,” they say, is that by “layering in the benefit of shifting just 10% of U.S. hardware sales from third parties to the Apple stores, the profitability of the program increases from below average to average.”

The card’s dual goals of tying more people to Apple products and earning some profit will be interesting to watch, as it has the potential to become another important product for Apple’s services business.

Samsung #Unpacked2019: Beyond the Note 10

For several years now, Samsung’s August Unpacked event has been bringing to the market the latest generation of their Galaxy Note and this week at the Barclays Center in Brooklyn was no different.

The formula was a familiar one, with a focus on productivity and delivering a powerful device for Samsung’s most engaged and loyal customers. Eight years on from the original Note launch, which created the “phablet” category, a lot has changed in the market, and Samsung’s latest iteration of its flagship reflects those changes:

  • Most phones got larger, and while the Note started out by providing a larger screen, the differentiation over time became the full experience centered around the screen rather than the screen itself. While some consumers might be happy to push size to the limit with the Note 10+ and its 6.8” display, Samsung wisely recognized that some might prefer the smaller 6.3” screen of the Note 10. When we consider that most of the competition, especially coming out of China, is choosing the larger size for their flagship products, one can see an opportunity for Samsung to attract a wider audience with a premium experience in a more mainstream size.
  • The addition of the smaller Note 10 and the wider range of colors for this year’s lineup also makes me feel that Samsung recognises that Note power users can be female too.
  • The Note family also gets a 5G variant, but wisely, 5G is not the default throughout the lineup. While Note buyers appreciate cutting-edge technology, they are also technology-savvy users who understand the current coverage limitations of 5G. Having a variant for Verizon allows Samsung to please early adopters who are most likely on an annual upgrade cycle, as well as early mainstream buyers who might be happy to wait until next year to embrace 5G.
  • Productivity takes on a broader meaning thanks to an upgraded S Pen and DeX. Productivity is not just about traditional workflows and apps; it now embraces the creation of content that bridges the physical and digital.

In a way, I feel that the Note has grown up to really marry work and play in the most seamless way, not because of hardware but because of software and services integration, which this year also included Outlook, OneDrive, Your Phone, and Link to Windows for a better smartphone-to-PC workflow.

Partnerships for a Best of Breed Ecosystem 

Probably the most interesting part of Unpacked2019 was the newly announced open collaboration between Samsung and Microsoft. A collaboration that started many years ago and culminated this week with Microsoft CEO Satya Nadella joining Samsung Electronics CEO DJ Koh on stage to talk about how together they can empower every person and every business to do their best work.

Marketing soundbite aside, the relationship between the two companies is full of potential. DJ Koh mentioned the term “open collaboration” a few times, and I have to admit I am not quite sure what it means. When it comes to Microsoft and Samsung working together, I think of their collaboration as highly complementary. Microsoft gets key software and services into the pockets of what eventually will be millions of users as the collaboration expands from the Note to other devices. Samsung gets from the pockets onto the desks of millions of people, broadening the value delivered by its phones. Maybe open, in this instance, means transparent and purposeful. Samsung has no aspirations in the cloud business, and I am quite sure Microsoft has no aspirations to compete in the smartphone market.

The collaboration also included the reveal of the Galaxy Book S, an ultra-slim, always-on, always-connected PC built on Qualcomm’s Snapdragon solution. While the market has been moving slowly, there is no question that the future of PCs is connected, and Samsung understands how to design for a highly mobile computing experience as well as how to work with carriers to bring it to market.

While some might look at the announcement as just marketing fluff, I think it creates a lot of upside for both brands. The opportunity in the short term is to create stickiness by widening the value each individual brand brings to the table. The phone drives more value because of the deeper and seamless connection to the PC, which in turn gets more value because of the phone and the cloud. We are talking about a seamless workflow that most would argue is only possible in an Apple ecosystem.

Longer-term, I would like this collaboration to bring more integrated experiences that highlight Microsoft’s cloud and intelligence. Think of Microsoft apps like Translator or Pix and how much bigger their uptake would be if they were embedded in the phone experience. On the enterprise side, there is also an opportunity for Cortana’s brain to add to Bixby’s voice, given that, we have been told, the former will never become a digital assistant in the traditional sense and the latter has been struggling to take off.

What is clear is that Samsung with Microsoft can pursue the US enterprise market more aggressively both with hardware – phones, PCs and wearables – and solutions, potentially helping to compensate for a saturated consumer market.

What About Google? 

Some event commentators pointed out that Google and Android were two names that were not mentioned during Unpacked and wondered what could be read into it. Microsoft and Google serve different purposes in my mind. In a way, it seems that Samsung is decoupling the OS it uses from the ecosystem it wants to build through partnerships that at times might compete with Google while still benefitting Android. Google’s relationship remains key to Samsung when it comes to the operating system and consumer services. For Google, Samsung remains the leading Android brand and a technology partner that will bring smartphones into 5G and foldable designs. The two companies, however, have more overlap in business aspirations than they did at the start of the smartphone market.

As Google gets more serious about its hardware, it is to be expected that it will tighten its services and hardware offering, but this might not be appealing to those users, especially in the enterprise, who are rooted in the Microsoft ecosystem and want to use an Android phone.

Google has always been quite platform-agnostic when it comes to its services, but considering how much business overlap there is between Google Cloud and Azure and between G Suite and Office 365, it is unlikely we would see the level of integration Samsung and Microsoft are driving. I would welcome Samsung’s renewed collaboration with Microsoft as a value add for Android users who might otherwise think they need to look elsewhere for a seamless multi-device computing experience.

 

Apple’s Strategy with Apple Card

I’ve had the opportunity to use the Apple Card as part of a private invite/preview since Friday. While I don’t intend for this to be a review, what I do want to discuss is the strategic opportunity Apple Card presents for Apple, which living with the card has caused me to observe.

Why a Credit Card From Apple?
Casual observers of Apple will remark that releasing a credit card is out of character for Apple. I disagree, as forming an opinion on this matter depends entirely on what you believe Apple is as a company. Is Apple a computer company? Services company? Product company? Technology company? In my view, and I’ve written extensively on this, Apple is a customer experience company. If you view Apple as a company that strives to find product opportunities where the customer experience is lacking and where it can solve pain points for consumers, then no product category is off-limits. This certainly extends to technology, but technology is simply an ingredient of the overall Apple process.

Apple Pay was always an interestingly positioned solution within Apple’s products because making payments is a part of daily life. The opportunity for Apple to insert itself, with its emphasis on customer experience, into payments was clear with Apple Pay, and extending into the banking arena is a natural progression.

The Strategy
From my own usage, having an Apple Card made me more intentional about using Apple Pay. While I was already a heavy Apple Pay user and the vast majority of places I shop take Apple Pay, I still did not use it 100% of the time. For whatever reason, at certain stores that accept Apple Pay, I still pulled out my credit card. Perhaps I was just a creature of habit. All of that changed once I got Apple Card. The 2% cashback on Apple Pay payments was the initial draw, but seeing that daily cash show up every day was even more psychologically rewarding. Those two things have caused me to only use Apple Pay now.

I’ve also moved all my Apple iTunes purchases and subscriptions over to Apple Card to get the 3% back on the >$100 a year I spend on app purchases and subscriptions facilitated by Apple. Here again, the psychological benefit comes through, as I started thinking about moving all my subscriptions, even ones I’ve set up outside the App Store, to go through the App Store so I can get 3% back on a range of other subscriptions to news or media services. All of this is designed to incentivize me to go through Apple as the marketplace for commerce and services as much as possible.

Last quarter on their earnings call, Apple announced they now facilitate over 420 million subscriptions. While this number will continue to go up without Apple Card, I can imagine the portion of Apple’s base who do get an Apple Card will only help drive this number higher, faster.

Ultimately, however, Apple Card is strategically less about making money (on services) for Apple. The real strategic play with Apple Card is driving customer loyalty even higher, creating more stickiness, and ultimately adding more customer value.

Apple likes to do things that send the message to its customers that staying in, and investing in, its ecosystem will bring you exclusive advantages. Apple does a number of things that I view simply as software, or services, whose sole function is to increase customer value. These things aren’t about making money and are more about providing a unique, differentiated, and exclusive customer experience. Apple Card falls into this category for Apple.

Apple’s Long Game and Financial Disruption
I firmly believe Apple feels the old systems of banking are poised for disruption. I have a lot of friends who are investors in FinTech companies, and the more I dig into this space, the more I’m convinced that there are huge holes in the finance market and that investors are open to innovative solutions. Banking, commerce, transactions, investing, etc., are all on the cusp of being disrupted by technology that solves the major pain points that are glaringly obvious for many consumers.

The relationship with Goldman Sachs is an interesting one. This is not the company I would have expected Apple to partner with here, nor is it a company most consumers think about when they think about banking or financial products relevant to them. Both companies, Apple and Goldman Sachs, are playing a long game here, as Apple (as I’ve long predicted) starts inserting itself more directly into financial services, and Goldman Sachs wants to start to create more brand affinity for future consumer products and services (thanks to Apple’s privacy stance, Goldman Sachs is keeping all your data private and not selling a profile, as some credit card companies do). If my words here don’t make this glaringly obvious, at least on Apple’s part, just see how they position it on Apple’s own website.

Going even further, Apple is directly highlighting Goldman Sachs as their partner and positioning them in a positive way.

Every credit card needs an issuing bank. To create Apple Card, we needed a partner that was up for the challenge of doing something bold and innovative. Enter Goldman Sachs. This is the first consumer credit card they’ve issued, so they were open to doing things in a whole new way.

This wording is a demonstration of how Apple is planting the seeds for future disruption of financial services. And one of my favorite sayings, which serves as a helpful barometer for disruption, is “wherever unhappy customers are, the potential for disruption exists.” While a consumer may be content with their banks or financial services, I can’t imagine customer satisfaction is at all-time highs in that sector. There is much to be desired, and Apple Card feels like a step in the direction of raising the bar for customer experience and satisfaction when it comes to financial services.

The Apple Card Customer
What makes this play interesting is that Apple is willingly competing with the points and rewards system via Daily Cash. And to be honest, for a lot of mainstream consumers, that makes sense as a perk. There will certainly be those frequent flyers and travelers for whom airline rewards cards make the most sense simply due to how much they travel. But knowing that most consumers in the US balance multiple reward or points cards for different things, I would not be surprised if Apple Card becomes one of the cards consumers keep at their disposal to use in the scenario that gets them the most value.

Apple Card fits Apple’s underlying product strategy. While it competes with other credit cards, which offer similar things, it is the total experience and the sum of its parts that separate it from the pack. It is a fully integrated experience, something Apple does very well, and its deep integration with iPhone is a differentiator.

But ultimately, I think Apple will raise the bar here, and other credit card companies will likely feel pressured to step up their game. This, I think, is positive whether or not Apple gains a critical mass of customers for Apple Card. If they can help shift the industry toward better practices, including privacy and security, and help create a better, more inclusive financial experience for consumers, then it’s a big win for consumers in general.

IBM Leveraging Red Hat for Hybrid Multi Cloud Strategy

While it’s easy to think that moving software to the cloud is old news, the reality in most businesses these days is very different. Only a tiny fraction of the applications that companies rely on to run their day-to-day operations operate in the cloud or have even been modernized to a cloud-native format.

In fact, at a recent cloud-focused analyst event, IBM pointed out that just 20% of enterprise applications are running in either a public cloud (such as AWS, Microsoft Azure, Google Cloud Platform, etc.) or a private cloud. And remember, this is nearly fifteen years after cloud computing services first became publicly available with the launch of Amazon Web Services. It stands to reason, then, that the remaining 80% are old school, legacy applications that are potentially still in need of being updated and “refactored” (or rewritten) to a modern, flexible, cloud-friendly format.

This opportunity is why you see most enterprise-focused software companies still spending a great deal of time and money on tools and technologies to move business software to the cloud. It’s also one of the main reasons IBM chose to purchase Red Hat and is starting to leverage that company’s cloud-focused offerings. IBM has a very long history with enterprise applications through both its software and services businesses and, arguably, probably has more to do with the enormous base of traditional legacy business applications than any other company in existence.

To IBM’s credit, for several years now, it has been working to modernize the organization and its offerings. A key part of this has been an emphasis on cloud-centric services, such as its own IBM Cloud, as well as tools and services to migrate existing applications to the cloud. Red Hat’s OpenShift, an open source, Kubernetes-based container platform (a technology that sits at the heart of most cloud-native applications), is an essential part of that cloud-centric strategy.

Specifically, OpenShift, along with IBM’s new Cloud Paks, can be used to help modernize legacy applications into a containerized, cloud-native form, which can then be deployed in a private cloud (such as behind the firewall of a company’s own on-premise datacenter), in a hosted environment, or in one of several public clouds, including IBM’s own cloud offering. What makes the latest announcements most compelling is that OpenShift is widely supported across all the major public cloud platforms, which means that applications written or rebuilt to work with OpenShift can be deployed across multiple different cloud environments, including Amazon AWS, Microsoft Azure, Google Cloud Platform, IBM Cloud, and Alibaba.

In other words, by building the tools necessary to migrate legacy applications into a format that’s optimized for OpenShift, IBM is giving companies an opportunity to move to a hybrid cloud environment that supports public and private cloud, and to leverage a multi-cloud world, where companies are free to move from one public cloud provider to another, or even use several simultaneously. This hybrid multi-cloud approach is exactly where the overall enterprise software market is moving, so it’s good to see the company moving in this direction. To be clear, the transition process for legacy applications can still be long, challenging, and expensive, but these new announcements help continue the evolution of IBM’s cloud-focused positioning and messaging.
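To make the portability argument concrete, here is a minimal, hypothetical sketch (my own illustration, not IBM or Red Hat tooling) of what “write once, deploy to any cluster” looks like in practice: the same container spec pushed, unchanged, to two different Kubernetes/OpenShift clusters using the standard Kubernetes Python client. The cluster context names, namespace, and image are placeholder assumptions.

```python
# Hypothetical sketch: the same containerized workload deployed, unchanged, to two
# clusters (e.g., an on-premise OpenShift cluster and a public-cloud one).
# Context names, namespace, and image below are placeholders, not real endpoints.
from kubernetes import client, config

def make_deployment(name: str, image: str) -> client.V1Deployment:
    # A plain Kubernetes Deployment; OpenShift clusters accept these as well.
    container = client.V1Container(
        name=name, image=image, ports=[client.V1ContainerPort(container_port=8080)]
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name=name), spec=spec,
    )

deployment = make_deployment("modernized-legacy-app", "registry.example.com/legacy-app:1.0")

# Only the kubeconfig context changes between targets; the application spec does not.
for cluster_context in ["onprem-openshift", "public-cloud-openshift"]:
    config.load_kube_config(context=cluster_context)
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
    print(f"Deployed to cluster context: {cluster_context}")
```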

Of course, IBM also has to walk a fine line when it comes to leveraging Red Hat, because Red Hat is widely seen as the Switzerland of container platforms. As a result, Red Hat needs to reassure all its other cloud platform partners that its software will continue to work as well on their platforms as it does on IBM’s own cloud. To that end, IBM is very clear about maintaining Red Hat as a separate, independent company.

At the same time, IBM clearly wants to better leverage its connection with Red Hat and made some additional announcements which highlight that connection. First, the company announced it was bringing a cloud native version of OpenShift services to the IBM Cloud, allowing companies that want to stay within the IBM world a more straightforward way to do so. In addition, the company announced it would be bringing native OpenShift support to its IBM Z and LinuxONE enterprise hardware systems. Finally, the company also debuted new lines of Red Hat-specific consulting and technology services through the IBM services organization. These services are designed to provide the skill sets and training tools that organizations need to better leverage tools like OpenShift. The journey from legacy applications to the cloud doesn’t happen overnight, so there’s a tremendous need for training to get businesses ready to make a broader transition to the cloud.

Of course, even with all the training and tools in the world, not all of the remaining 80% of traditional legacy enterprise applications will move to the cloud. For many good reasons, including regulatory, security concerns, and unclear ROI (return on investment), certain applications simply won’t become cloud native anytime soon (or ever). There’s no doubt, however, that there is a large base of legacy software that is certainly well-suited to modernization and adaptation to the cloud. Not all of it will be able to leverage the new IBM and Red Hat offerings—there are quite a few aggressive competitors and other interesting offerings in this space, after all—but these moves certainly highlight the logic behind IBM’s Red Hat purchase and position the company well for the modern hybrid multi-cloud era.

Apple’s Bullish Position on Augmented Reality

During last week’s earnings call, Tim Cook mentioned a couple of times how excited he is about products in the pipeline. While he did not reference anything specific, I am convinced that much of his excitement is around what they are doing in AR.

Of course, they are doing exciting things with Macs, iPads, iPhones, Apple Watch, Apple TV, Services, etc. However, AR is a new frontier for Apple that, at least on paper, promises great returns for them in the future.

One of the best pieces I have seen written on the future impact of AR was published recently by Wired, entitled “Mirrorworld.”

The article lays out quite well the evolution of the three digital platforms that have driven and will drive our future:

“The first big technology platform was the web, which digitized information, subjecting knowledge to the power of algorithms; it came to be dominated by Google. The second great platform was social media, running primarily on mobile phones. It digitized people and subjected human behavior and relationships to the power of algorithms, and it is ruled by Facebook and WeChat.

We are now at the dawn of the third platform, which will digitize the rest of the world. On this platform, all things and places will be machine-readable, subject to the power of algorithms. Whoever dominates this grand third platform will become among the wealthiest and most powerful people and companies in history, just as those who now dominate the first two platforms have. Also, like its predecessors, this new platform will unleash the prosperity of thousands more companies in its ecosystem, and a million new ideas—and problems—that weren’t possible before machines could read the world.”

The article goes on to lay out how AR will be a key part of this third platform that digitizes the rest of the world and explains well why AR is important and will drive new innovation in the coming decade.

But this is the money statement in the article:

“Whoever dominates this grand third platform will become among the wealthiest and most powerful people and companies in history, just as those who now dominate the first two platforms have.”

It’s no wonder that Tim Cook and Apple officials, who know their AR strategy and, more importantly, know how they will implement AR into their ecosystem of products and services, are bullish these days.

After the WWDC keynote where Cook introduced ARKit and spoke about their deep interest in AR, I had some time with Tim Cook at a private reception that evening. He had come early to the event, as did I, and was very open to spending some time with me to discuss AR. In fact, he was very animated when he shared his thoughts with me and even said that for Apple, “AR may be one of Apple’s biggest product contributions and successes in the future.”

Given their success with the iPhone, that was a pretty heady statement. But I could tell that he was sincere in his view and not being boastful or in Apple promo mode, but rather stating how important he saw AR to Apple’s future.

Many people have already experienced AR in some form. If you have ever played Pokemon Go, you know how digital content can be superimposed on real-life objects and spaces. The iPhone already has many apps that integrate AR, such as the IKEA app that lets you place virtual furniture into any room in a house. But the apps are teasers.

Apple gave us another glimpse of AR being used in Apple Maps at WWDC, which will appear later this year. Instead of the flat 2D map we have today on iOS and Macs, these new maps are more 3D-oriented, with virtual data superimposed on a person’s surroundings. This particular AR feature of their maps is best on an iPhone or iPad (or their eventual AR glasses), and when walking instead of driving. But the short demo they showed was impressive, and when AR is applied to maps, it will be a game changer in terms of personal navigation.

What is intriguing about Apple’s role in AR is that they are most likely the company that will bring AR to the masses. There is a lot of work going on in AR at many companies, but almost all are making AR devices in siloed efforts. While Google is the other company that could challenge Apple head-on, the fragmentation of Android will make it harder for them to gain the kind of quick buy-in that iOS allows, since iOS and its earlier iterations will in most cases provide backward compatibility for any AR app or related solution Apple brings to market. I do suspect that a special version of the iPhone, optimized for some type of AR glasses, will be designed to maximize the experience. But I also think that Apple will make AR glasses work with existing iPhones too, albeit without some of the extra AR capabilities that would come with an iPhone designed around glasses as an extension of the AR experience.

The iPhone will be the delivery system for most AR apps from Apple, but Apple clearly understands that some type of goggle or headset also needs to be part of its AR solution. They have many AR patents in the works, but the most recently filed patent application shows a mixed reality headset that tracks your whole face.

While it is impossible to completely decipher what Tim Cook and team are talking about when they say that they are excited about what is in Apple’s pipeline, I am convinced that the greatest excitement is around what they are doing in AR and how that will impact Apple’s longer term growth.

If Wired’s comment that the people or companies who dominate AR “will become among the wealthiest and most powerful people and companies in history” is accurate, Apple’s fortunes will be rising, not falling.

Podcast: T-Mobile, Sprint, Dish, Apple Earnings, Siri and Voice Assistant Recordings

This week’s Tech.pinions podcast features Carolina Milanesi, Mark Lowenstein and Bob O’Donnell discussing the merger of T-Mobile and Sprint, and the launch of Dish as a fourth US carrier, as well as how 5G impacts all of this; analyzing Apple’s latest earnings and what it means for the company’s strategy moving forward; and debating the monitoring of recordings made through Siri and other voice assistant platforms and what that says about the state of AI.

The Increasing Importance of Apple’s iPhone Trade-in Program

One of the many advantages Apple has versus its competitors in the smartphone market is the fact that its iPhones hold value longer than most other products in the market. As the overall market has matured, shipment growth has slowed, and prices of top-tier smartphones have increased, this advantage has become increasingly important. Apple even discussed it during its most recent quarterly earnings calls. Despite this attention, however, I believe competitors and investors still underestimate just how important the iPhone Trade-in Program is to Apple’s ability to drive new-phone purchases, secondary-market revenues, and continued installed base growth.

Driving Sales of New iPhones
During the recent earnings call, CEO Tim Cook answered a question about the effectiveness of the trade-in program in retail this way: “In retail, it was quite successful…And trade-in as a percentage of their total sales is significant, and financing is a key element of it.” A quick visit to Apple.com’s iPhone page or a carrier site such as Verizonwireless.com demonstrates how crucial the trade-in programs have become: it’s often the first item you’ll see on the page.

On Apple.com, if you click through to the trade-in-offer page, you’ll find that if you trade in an iPhone 8 you can get an iPhone XR for $479 (down from $749) or $20 per month with financing. Alternatively, you can get an iPhone XS for $729 (down from $999), or $31 per month. And it’s not just the latest iPhones that hold residual value. For example, an iPhone 6 will still net you $100 off those same phones. You can get similar deals at Verizon (the carrier also offers trade-ins for a handful of Samsung, Google, and LG phones). Of course, the other major U.S. carriers are also offering comparable deals on their sites and in their stores.
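A quick sketch of the arithmetic implied by those offers, using only the prices quoted above (the trade-in credit is simply the difference between the new price and the post-trade-in price):

```python
# Implied iPhone 8 trade-in credit, using the Apple.com prices quoted above.
offers = {
    "iPhone XR": {"new": 749, "after_trade_in": 479},
    "iPhone XS": {"new": 999, "after_trade_in": 729},
}

for model, price in offers.items():
    credit = price["new"] - price["after_trade_in"]    # $270 in both cases
    pct_off = 100 * credit / price["new"]
    print(f"{model}: ${credit} trade-in credit (~{pct_off:.0f}% off the new price)")

# Even a much older iPhone 6 still nets roughly $100 off the same phones.
```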

These trade-in options, combined with financing, are how the industry moved to accommodate consumers when the subsidy model of mobile phone buying went away. Years ago, to capture the residual value of a phone, consumers had to resell the device themselves or trade it in to a third party such as Gazelle. Today, it’s pretty painless, and that removal of friction has helped make trade-ins a key element of most people’s phone-buying experience.

Driving Secondary-Market Revenues
Back in 2016, I talked about the fact that Apple was selling refurbished phones on Apple.com. At the time, the vast majority of phones Apple was selling itself were likely coming in via the iPhone Upgrade Program, which lets customers turn in their iPhone for a new one every year. That program is still likely a source for refurbished phones on Apple’s site, but the dramatic increase in regular trade-ins is undoubtedly driving an increasing percentage of the volumes.

A quick spin through the refurbished phones available on Apple’s Web site is instructive. The company currently has a good selection of phones, including iPhone 7 Plus, 8, 8 Plus, and X in a range of colors. Alongside the refurbished price, Apple lists the new price, which makes it easy for buyers to see their savings. For example, a refurbished iPhone 7 Plus with 128GB of storage in Rose Gold sells for $569, or $100 less than the current new price. At the other end of the spectrum, a refurbished iPhone X with 256GB of storage in Space Gray sells for $899, or $150 off the new price.

Apple’s refurbished stock is collected, cleaned, and inspected, and each iPhone gets a new battery and shell, which makes Apple’s offerings notably better than some others out there. These expenses, combined with the acquisition cost (the trade-in value to the consumer), represent the cost to Apple to bring these refurbished products back to the market. The difference between these costs and Apple’s selling price is profit. Let’s use the iPhone X as an example. Apple offers $450 in trade-in value for this phone, pays the cost to refurbish the unit, and then resells it for $899. The company is making good money on its refurbished iPhone sales. And remember, this is the second time Apple has profited from the sale of this phone.
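A minimal sketch of that margin math, using the figures above; the refurbishment cost (new battery, new shell, inspection, logistics) is a placeholder assumption, since Apple doesn’t disclose it:

```python
# Rough gross-profit math on a refurbished iPhone X (256GB) resale, per the figures above.
resale_price = 899        # Apple's refurbished price
trade_in_value = 450      # credit Apple gives the customer for an iPhone X trade-in
refurb_cost = 100         # placeholder assumption: battery, shell, inspection, logistics

gross_profit = resale_price - trade_in_value - refurb_cost
margin_pct = 100 * gross_profit / resale_price
print(f"Estimated gross profit: ${gross_profit} (~{margin_pct:.0f}% of the resale price)")
```

Even with a generous allowance for refurbishment costs, the second sale is clearly profitable, which is the point above.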

Positive Impact on Installed Base
The final positive aspect of Apple’s iPhone Trade-In Program and the sales of refurbished phones is that it helps to grow Apple’s installed base. As services become an increasingly large percentage of Apple’s revenue, I can’t overstate the importance of a strong and growing installed base for Apple. Tim Cook mentioned this on the earnings call, too.

“Installed base is a function of upgrades and the time between those,” Cook said. “It’s a function of the number of switchers coming into iOS, macOS—and so forth—tents. It’s a function of the robustness of the secondary market, which we think overwhelmingly hits an incremental customer.” He went on to note, “The secondary market is very key, and we’re doing programs et cetera to try to increase that because we think we wind up hitting a customer that we don’t hit in another way.”

In other words, every time Apple reclaims an iPhone from an existing customer who trades it in for a new one, the company not only retains that customer in the installed base, but it potentially adds a new one when it resells that refurbished phone. And because the refurbished phone costs less than a new phone, Apple is reaching more cost-constrained customers than in the past. This means there are more people in the iOS ecosystem to buy and use Apple’s current services such as iCloud Storage, Apple Pay, Apple Music, and News+, as well as upcoming services such as Apple TV Plus. These customers may not be as likely to spend freely as Apple’s traditional customers, but every new user is a potential source of additional revenue.

Bottom line, with the iPhone Trade-in Program Apple has rather masterfully addressed the inevitable challenge of a slowing smartphone market. It makes the high cost of acquiring a new iPhone more tenable, allows Apple to capture a good chunk of the residual value of selling an old iPhone, and it helps Apple to continue to build out the iOS installed base. That’s a win, win, win, and I expect to hear Apple talk even more about this going forward.

Apple Earnings: iPhone Stability, Wearables’ Future, and Services Ebbs and Flows

There are important trend lines to note as we unpack Apple’s latest earnings. Perhaps one of the more interesting observations was a dynamic I had not fully internalized before: the degree to which a good quarter for Apple impacts a number of other tech stocks. I read several investor notes that hinted at this but did not show exhaustive evidence: when Apple does well, it seems to lift investor confidence and tech stocks, and sometimes the overall index seems to see a lift as well. This could be due to confidence, with Apple viewed as a bellwether for tech as a whole, or for other reasons, but I find it fascinating. A healthy Apple is helpful to a healthy tech ecosystem.

A few points are worth analyzing related to Apple’s latest earnings.

iPhone Stability
For iPhone, there were a few things worth noting. The first is what most news outlets pointed out: iPhone revenue came in at less than 50% of Apple’s total revenue for the first time in years. This is simultaneously a story and a non-story. It’s a story because it is notably the first quarter in a long while where this has happened. But it is also likely to be a pattern only for mid-year quarters. The December quarter will be a large iPhone quarter revenue-wise, as will the March quarter, while the June and September quarters will see dynamics similar to this one, with iPhone closer to 50%, for the foreseeable future.

The worry, as I see it, is that this feeds the incorrect narrative that Apple is the iPhone company. I’ve never viewed Apple this way; I see Apple more as a customer experience company, which can manifest itself in any number of products. The iPhone is simply the biggest hit Apple has had to date, but not necessarily its only massive hit. I worry that when the December and March quarters come around, the focus will shift back largely to iPhone and we will be back to confusing public narratives. The iPhone is great, but it isn’t the end of the story.

Taking the long view on iPhone for a moment, it looks as though, after a period of hyper-growth followed by an overall market slowdown, iPhone sales are stabilizing. This was the main thing I wrote to watch for when all the “sky is falling” articles were being written about Apple reaching peak iPhone. We knew sales would decline; the question was what level of annual sales iPhone would stabilize at. It looks like Apple’s annual iPhone sales will stabilize in the 189-200 million unit range for at least the next two years. That would indicate roughly a quarter of the base upgrading each year, around 4 to 4.5 years to refresh the base to modern technology, and the average Apple user standardizing on roughly a three-year refresh cycle.
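As a rough sanity check on that refresh-cycle math, here is a minimal back-of-the-envelope sketch. The installed-base figure is my assumption for illustration (Apple does not disclose it precisely), loosely consistent with the nearly one billion iPhone users mentioned below:

```python
# Back-of-the-envelope refresh-cycle math (assumed figures, for illustration only).
installed_base = 900_000_000                     # assumed iPhone installed base
annual_sales_range = (189_000_000, 200_000_000)  # stabilized annual unit sales from the text

for units in annual_sales_range:
    share_upgrading = units / installed_base   # fraction of the base refreshed per year
    years_to_refresh = installed_base / units  # years to cycle the entire base
    print(f"{units / 1e6:.0f}M units/year -> {share_upgrading:.0%} of the base, "
          f"~{years_to_refresh:.1f} years to refresh it")
```

The exact output depends on the installed-base assumption, but it lands in the same ballpark as the quarter-of-the-base and 4 to 4.5 year figures above.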

The only material possibility to change this is if some dynamic around trade-in, or another stimulus from Apple, can move their base to upgrade on a shorter cycle regularly, but I do not see that happening for the bulk of their customer base. From the data and sales trends we see now, it seems as though the nearly one billion iPhone users out there have settled into predictable buying patterns, and that is a good thing.

Wearables are the Future
I’ve been long Apple wearables since the day I first tried Apple Watch. It was clear to me back then that Apple had a future with computers we put on our bodies, and the wrist was the first bit of body real estate Apple tackled. AirPods are Apple’s play for the ears, and sometime in the next five years, we may see their first attempt to bring computers to our eyes. My theory all along has been that all these Apple wearables will create a complete system (or computer) and work as a synchronous whole, going beyond even where iPhone could go.

Apple’s wearables business saw 48% growth, with AirPods continuing to penetrate Apple’s base and Apple Watch now on a continued growth trend. The one stat that stood out to me most in Tim Cook’s commentary was that 75% of Apple Watch sales in the June quarter were to brand-new Apple Watch owners.

We have studied Apple Watch owners extensively and know they are a loyal bunch who also hold onto their Apple Watches for a long time. Series 4 was the first Apple Watch that materially moved the existing base to upgrade, so it is encouraging that a good portion of quarterly sales are to new customers. By my model, the Apple Watch installed base is likely in the 60 million range by now and growing.

With wearables, Apple is laying a new foundation for a new paradigm of computing.

Services Ebbs and Flows
Commentary around Apple’s services growth focused more on it being a bit “light.” A key debate here is whether services have seasonality like much of Apple’s other product lines, or whether we should assume some level of a bell-curve trend. This clearly depends on the quality of Apple’s services and their ability to compete, but beyond that, things like overall paid subscriptions (now over 420 million) as well as the continued growth of iCloud and AppleCare will likely contribute steadily.

Apple didn’t seem to give us much about Apple News+, but I take the lack of commentary around conversion rates to suggest Apple News+ is not off to the start Apple hoped. When Apple Music came out, management was quick to give us some numbers around subscriptions; the absence of that for Apple News+ I take as a negative.

We know Apple Card is coming out in August, and Apple Arcade and Apple TV+ will be launching in the fall. Commentary strongly suggests these will be paid services, likely charged a la carte to start, which is not a surprise. The service I am most interested in is Apple TV+, and beyond Apple Music, TV+ has the most potential upside of all Apple’s services in my mind. Consumers simply love entertainment content and video in particular.

Modeling services is going to remain tricky, and given the huge investment that producing quality video requires, we should assume Apple’s services gross margin will go down as they grow the video platform.

Lastly, on services, the points I made about iPhone stability, along with Apple’s commentary that the installed base continues to grow, are important to the services narrative. iPhone customers go deeper into the Apple ecosystem of products and services the longer they are customers, which means that as Apple grows its base, it has a highly engaged and easily accessible addressable market within its user base. This is not a dynamic I see at any other consumer tech company.

The biggest question for me is how far Apple takes their services beyond their ecosystem. As I pointed out last month, Apple’s inroads to India may be through their services, but those services would all need to run on Android. Tim Cook’s commentary seemed to suggest, for now, that some Apple services will be cross-platform and others will be tied to their devices. I think this is the correct approach to start, but I’m not convinced it is the best strategy for the long term. But again, this is a key debate.

Monitoring Government Antitrust Probe of Tech Companies

Like many in Silicon Valley, I have watched the most recent moves by the FTC and other government agencies that are probing some tech companies for antitrust violations.

I have a great deal of experience with the U.S. government going back to 1985 when I was asked to be the intermediary between the Defense Department and Intel. It is a long story, but in those days, outside of the government asking for Silicon Valley’s help on military projects, they had little contact with actual Silicon Valley leadership. I was known to them as a top analyst with relations with tech execs and was asked to help them connect with proper Intel execs about an issue that was highly private but had national interests in mind.

Over the years, I have served on presidential advisory councils and given feedback on tech issues to three government agencies. I have even advised some congressional leaders on tech issues.

From these years of experience with the U.S. government at many levels, I can say that they clearly do not understand technology, how it gets developed and more specifically, how technology actually works.

I will admit that over the last ten years, they have become a bit more savvy about tech issues, but they still do not really understand how technology is created and ultimately works.

The current quest to try and prove tech companies are in violation of very dated antitrust laws will be difficult to prosecute. Antitrust laws, as defined during the days of railroads and Ma Bell, are not easily applied to current tech companies when legitimate competition exists in many forms.

Even the argument that these companies need to be broken up is a stretch if traditional antitrust laws are used to achieve this goal.

A lot has been written about this subject, so I won’t go into the problems the FTC will have, or how the tech companies can fight this under current laws.

But I do want to point out that our legislators really are clueless when it comes to the inner workings of tech and, more specifically, the potential ramifications of their actions and their impact on the economy.

Earlier this week, the San Jose Mercury had an editorial that points out this very issue:

“Let’s hope they don’t forget that innovation is at the heart of our economic growth. Our ability to remain a world power requires that we maintain a technological edge over China and our other global competitors.

Whatever the federal government does, it must maintain incentives for U.S. tech firms to keep spending on research and development, which is one of their primary tools to evolve and prepare for the future.

Tech leads U.S. companies in research and development spending. Amazon ($14.1 billion), Google ($10.15 billion), Apple ($7.65 billion), and Facebook ($4.76 billion) were among the top 10 investors in R&D in 2018. They are using those billions as a strategic weapon to win what could accurately be described as the World War of Artificial Intelligence.

Time is of the essence. The digital landscape of 2030 is likely to be fundamentally different than it is today. After all, consider how fast it’s changed in the past dozen years. As recently as 2007, MySpace dominated the social networking landscape, receiving more than 70 percent of all visits to social networks. Facebook, which was only three years old, was a distant second.

Those companies launched a wave of innovation that helped the United States emerge from the 2008 financial crisis and create what has been the longest bull market in history. Unfortunately, Big Tech’s dominance over those years has led to abuses that deserve greater federal regulation.

As a result, the Department of Justice and Federal Trade Commission now needs to rein in Amazon, Apple, Facebook, and Google. But, in the process, the feds must take extreme caution not to stifle innovation.”

I was in Washington last summer and made this exact case to the legislators I met with. While I agreed that these companies might need oversight and, in some cases, should see their power monitored and even restrained when it makes sense, clipping their wings in a way that stifles their ability to innovate and to help the U.S. keep ahead in areas like A.I., 5G, IoT, self-driving vehicles, etc. would be a mistake.

These technologies will power the U.S. economy for decades, and tech companies need to be free to drive this economic engine without heavy-handed and misguided government regulation.

As I stated earlier, most government legislators and officials don’t understand tech to the degree that they can truly legislate these issues. My fear is that they will handcuff some of the companies they target from inventing new technologies that will power our economy and keep us ahead of China and Russia.

Made By Google Might Finally Mean Business with Pixel 4

The Made by Google team surprised everybody on Monday when Brandon Barbello, a product manager for Pixel, posted a blog sharing an early view of two features coming to the Pixel 4, which is expected to launch in the fall. This week’s blog adds to the confirmation of the Pixel 4’s existence tweeted by the head of Made by Google, Rick Osterloh.

So, we now know there is a Pixel 4 coming in the fall, October is the month most people have their money on, and we also know a couple of its features: Motion Sense and face unlock.

Motion Sense and Face Unlock

The Motion Sense feature builds on the work that Google has been doing for years under Project Soli. Motion Sense relies on a radar-based sensor that can track nearby movement. One use case shown in the video released with the blog is using your hand to flick from right to left to change music tracks. Other use cases mentioned are snoozing alarms and silencing calls. But possibly the most interesting use case Barbello mentioned in the blog is the ability for Soli to turn on face unlock after detecting your movement and intent to unlock the phone. Prepping the phone to scan your face and then unlock would then seem like a single, seamless step to users. If you have been using Face ID on an iPhone, you know that you need to lift the phone to unlock it, but while these are two distinct steps, it hardly feels that way to me. What does sound appealing from the blog is the claim that face unlock “works in almost any orientation – even if you’re holding it upside down.”
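To make that flow concrete, here is a purely hypothetical sketch of the sequence as described, with names and structure of my own invention rather than anything Google has published:

```python
# Hypothetical illustration of the described flow (my naming, not Google's):
# Soli's radar notices you reaching for the phone, the face unlock sensors are
# warmed up early, and by the time you look at the screen the phone can unlock
# in what feels like a single step.

def prepare_face_unlock() -> None:
    print("face unlock sensors powered up before the phone is lifted")

def handle_radar_event(event: str, face_matches: bool = False) -> str:
    if event == "reach_detected":
        prepare_face_unlock()
        return "armed"
    if event == "face_in_view":
        return "unlocked" if face_matches else "still locked"
    return "idle"

# Example sequence: reach toward the phone, then look at it.
print(handle_radar_event("reach_detected"))
print(handle_radar_event("face_in_view", face_matches=True))
```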

The speed and ease of use of face unlock does not come at the expense of security, Barbello points out, highlighting that users will be able to use it for secure payments and app authentication. We’ll see once it ships, but I am certainly excited to add this feature to Pixel because, even though I have no complaints about the fingerprint scanner on the Pixel 3 and 3a, I have come to appreciate Face ID on my iPhone and my iPad Pro.

Speaking of security, Google points out that facial scans will not leave the phone and will be stored in the Pixel’s Titan M security chip.

Interestingly, on the same day the blog was published, we also received confirmation that Google is asking consumers in the streets to scan their faces in exchange for a $5 gift card. Google explained that the purpose of this face canvassing was to ensure as broad a dataset as possible so that as many people as possible would be able to use face unlock on Pixel 4.

Why this Openness?

While the features Google shared in the blog are exciting, I am actually more excited about the approach that Google has taken with Pixel 4.

Over the years, I have criticized Made by Google for not being aggressive enough with their hardware strategy. Pixel was a step up from Nexus, but the first generation suffered from a small channel and very little advertising. Pixel 2 still saw a limited channel but more marketing dollars. Finally, Pixel 3 saw strong advertising and a broader go-to-market approach. Sales remain concentrated in mature markets, and overall market share remains limited, but on Alphabet’s earnings call last week Sundar Pichai called out the strong performance delivered by Pixel 3a.

Judging Pixel’s performance as a proportion of the overall market, however, is not the best way to assess how Google is doing. After all, Pixel is not intended to appeal to the whole breadth of the smartphone market. Google is interested in those consumers who can be highly engaged not just with the device but with Google services as well. This limits the addressable market both by geography and income.

Because of the target audience, I see Google’s best opportunity to drive sales resting in Samsung’s and Apple’s installed bases. If we go by the schedule all of these companies have kept over the past couple of years, we know both Samsung and Apple will have new models in the market before Pixel 4 is out. Samsung’s Unpacked is scheduled for August 7, where we expect to see the new Galaxy Note, and Apple is expected to follow in early September.

While some people were quick to speculate about Google’s lack of concern for cannibalizing Pixel 3 sales, I appreciated the attempt to get people’s attention before competitors drop their products. All Google needs to do is instill enough interest to get people to wait for Pixel 4 to launch before committing to the new Galaxy Note or the new iPhone. At the end of the day, if you are in the market for a Pixel now, and your purchase is not an emergency, you are most likely going to wait and see the new model and any price adjustments on the current one. So, really, no harm no foul on Pixel 3.

Android and Google

The Android ecosystem has changed quite a bit over the past couple of years. The initial blossoming of brands jumping on the opportunity Android offered to get to market faster and with limited investment was not enough to sustain brands that had been in the mobile phone market for decades.

Chinese brands started to grow their presence at home and soon internationally, first in Asia and then in Europe. While benefitting the overall Android ecosystem, this growth did not necessarily benefit Google, as many of these brands were working on their own ecosystems or in collaboration with more regional ecosystem owners. Pixel came to market to address this changing dynamic as well as to provide the best experience Google has to offer by bringing hardware, software, and services together with the added value of intelligence.

In a way, the balance that Google has been trying to keep between fostering the Android ecosystem with partners and pursuing its own hardware ambitions has become a self-determined balance. Depending on the market you are in, you see Android more or less intertwined with Google. In markets such as the US and Europe, Google services are such an integral part of the Android experience that it is hard for consumers to separate the two. These are the markets where a more aggressive strategy for Pixel has the potential to pay off. In markets where Google services are not available or not preferred, it is going to be harder for Pixel to grow share, as the competition would mostly be on hardware against local vendors who have a different go-to-market strategy, time to market, and a focus that is more “local” than global.

The current political climate, which is putting a lot of uncertainty on Huawei, is another reason why, for Google, Pixel has become a more significant need than before. Huawei posted good results just this week despite the current situation, but retailers in Europe have been concerned about inventory levels and increased weakness in consumer demand, mostly linked to the unclear long-term software support. Of course, there are alternatives to Huawei in the market, but outside of Samsung, none is particularly strong across geographies. Those alternative brands, like Oppo, Xiaomi, and OnePlus, are in a place of growth and opportunity but not at a point where they could walk away from Google if faced with stronger Pixel competition.

A simple way to think about the role of Pixel today is that Made by Google might have started because Google needed Pixel, but now Android needs Pixel too.

The Fortnite World Cup and The E-Sports Tipping Point

For many of us not in the Gen Y or Gen Z category, it is hard to relate to the growth of e-sports and the commentary that e-sports will be as big as, if not bigger than, other sports categories. As hard as it may seem to believe, this image will help cement why this is a likely tipping point for e-sports players.

This post details the winner, 16-year-old Kyle Giersdorf, who only started playing the game two years ago and currently spends 7-8 hours a day playing Fortnite. The size of the winnings, combined with the sheer size of Fortnite’s player base, is going to turn heads and continue to draw attention to e-sports tournaments.

The Global Scope
One of the main themes I’ve been writing about over the last year is the truly global digital platform that is emerging for video games and, as a result, e-sports. I’ve noted that the cross-platform trend, which allows gamers everywhere in the world, on any hardware and software platform, to play together, is a relatively new development. That theme is going to extend to every game, and eventually platform exclusives will no longer make business or customer sense.

The result of developers taking advantage of the cloud to bring their games to gamers on all platforms is the sheer size of their market opportunity. This market opportunity opens up financial scale not seen before in the video game world which, as the $3 million top prize at the Fortnite World Cup proves, means even larger winnings.

As if Epic, the owner and developer of Fortnite, were not already making more money than it knows what to do with, once a game reaches this scale it can make money off entry fees (some 40-50 million people participated in qualifying games) as well as tickets to watch the finals. The same business models that exist for other sports genres exist for e-sports, just at a completely different scale.

Fortnite will be used as the template for success by many developers and publishers, and that model is the way forward when it comes to gaming.

Race to a Billion
The only other sport that currently has the global scale of e-sports is soccer. The FIFA World Cup is a global event that draws roughly 1 billion viewers. I have no doubt that e-sports will catch the World Cup in viewership at some point, and ultimately surpass it. The same fundamentals that help the World Cup achieve its scale apply to e-sports.

Soccer/futbol is also the highest-paying sport for its athletes, another dynamic I think will come to e-sports in the not-too-distant future. Again, the economics of global scale in the digital age are simply larger than anything a physical sport can achieve, and as younger generations get older and gaming becomes a more central part of the world, its popularity is likely to surpass that of non-digital sports.

Fortnite’s success, and the Fortnite World Cup in particular, is the tipping point that, in ten years, we will look back on as the thing that fueled e-sports to whatever size and scale it becomes.

The Sports Debate
The last thing I want to mention is the criticism I hear most often, which is that e-sports should not be considered a sport. I’m sensitive to this, and I see the arguments on each side. While the strength training, conditioning, physicality, athleticism, and other elements of physical sports are not a part of e-sports, one could argue the mental side is just as challenging.

But the bigger point, regardless of how one feels about what should constitute a sport, is that e-sports enables a similar dynamic because of the rarity of the humans who make up the small percentage able to be the best at any given e-sports title.

Scarcity is valuable, and that dynamic will exist in e-sports just as much as it does in physical sports. That alone justifies the similar dynamics and economics that will be enabled within the e-sports genre.

T-Mobile, Sprint and Dish: It’s All about 5G

The US telco industry has seen its share of upheavals and evolutions over the last few years, but one of the biggest potential changes got kickstarted late last week when the US Dept. of Justice finally gave the green light to the long-awaited proposed $26.5B merger between T-Mobile and Sprint. Ironically, it took the introduction of Dish Network—a company best-known as a satellite TV provider, but one that has had its eye on being a more general-purpose service provider for some time now—to get the deal over the final hump of federal regulatory approval. (An antitrust lawsuit backed by several state attorneys general could still end up blocking the final merger, but the DoJ approval is widely seen as a strong argument for its completion.)

A tremendous amount of ink has already been spilt (or should I say, pixels rendered) discussing the whats and wherefores of the proposed merger, but in the end, it seems the most critical factor is 5G and what it will mean to the future of connectivity. Sure, there are arguments to be made about how our individual cellphone plan pricing may change or what services may or may not be offered, but those are all short-term issues. Strategically, it’s clear that the future of not just the mobile wireless industry, but connectivity in general, is increasingly tied to 5G.

In the near-term, of course, lots of people and companies are interested in building out 5G-capable networks, as well as devices that connect to them and services that can leverage them. That is indeed a huge task and something that’s going to take years to complete. Not surprisingly, some of the most compelling arguments for the merger—as well as for the new fourth 5G-capable network that Dish is now on the hook to complete—were around 5G-compatible spectrum, or frequency holdings, that each of the new entities would have if the deal was to go through.

Specifically, the new T-Mobile would gain a large chunk of Sprint’s mid-band, 2.5 GHz range frequencies (a subset of the larger group known as sub-6), which many have argued is an important middle ground for 5G. AT&T, Verizon, and now T-Mobile have focused their early 5G efforts on millimeter wave frequencies (around 39 GHz for all three of them, although T-Mo also has some 28 GHz spectrum), which offer extremely fast speeds but extremely short range and essentially only work outside (or near an interior-mounted millimeter wave small cell access point). Late in the year, T-Mobile plans to add 600 MHz frequencies, which sit at the bottom end of the sub-6 range and offer significantly wider coverage—but at speeds that aren’t likely to be much faster than (if even as fast as) some of the fastest 4G LTE coverage now available. The Sprint frequencies will allow the “new” T-Mobile to also offer faster download speeds at 2.5 GHz, rounding out their 5G offering. (AT&T and Verizon have committed to bringing sub-6 frequencies into their 5G offerings sometime in 2020.) Dish, the mobile carrier, for its part, will be able to leverage some existing spectrum it already owns in the 1.7-2.1 GHz range, as well as use some of the 800 MHz frequency that Sprint was forced to sell to Dish as part of the deal. All of it fits into the sub-6 category of spectrum, but the combination should allow Dish to create a 5G network with both good coverage and decent performance.

The one interesting twist on the mobile wireless side is that 5G heavily leverages existing 4G infrastructure investments, and in fact, 4G LTE service is getting better and faster as 5G is being deployed. As a result, the 5G buildout will, ironically, lengthen the usable lifetime of 4G LTE technology, as well as devices that use it—particularly those equipped with LTE Advanced capabilities and some of the spectrum sharing and compression technologies like 256 QAM, 4×4 MIMO (Multiple Input, Multiple Output), and carrier aggregation. Toss in technologies like Dynamic Spectrum Sharing (DSS), which in the world of 5G mobile infrastructure was pioneered by Ericsson and allows telcos with the appropriate equipment to share 4G and 5G spectrum, and the transition from 4G to 5G in mobile wireless should be very seamless (and almost invisible).

However, there’s more to 5G than mobile wireless, and that’s where things start to get really interesting. First, there are some very interesting options for building private 5G networks that companies could leverage across campus sites, or inside large manufacturing buildings, and essentially replace their WiFi network. While no one expects WiFi to completely go away, there are some very intriguing opportunities for network equipment makers and carriers to address this market because of the faster transfer speeds, higher levels of security, and the decrease in manageability costs that private 5G could provide versus WiFi.

There’s also the opportunity to replace broadband network connections and even supplement or replace WiFi in our homes as well. As in the business world, WiFi isn’t going to go away overnight in the consumer world (there are just too many WiFi devices that we already have in place), but it’s already possible to get 5G connections (heck, even some of the new 4G LTE Advanced networks—like AT&T’s confusingly labelled 5Ge) that are faster than a lot of home WiFi. Think of the potential convenience both at work and at home of not having to worry about two different types of wireless connections, but instead connecting everything through a single wireless broadband connection like 5G. In the future, it could be a very intriguing possibility.

Above and beyond the pure network “pipes” discussion, 5G also potentially enables a host of new services through technologies like network slicing. Essentially a form of virtualized or software-defined networks, network slicing will allow carriers to do things like provide a combination of different services to different companies or even individuals with a guaranteed quality of service, and much more. Innovative companies are likely to dream up interesting ways to package together existing services like streaming video and music, along with lots of other things that we haven’t even thought of yet to take advantage of the opportunities that network slicing could create.
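As a purely conceptual illustration of that idea (my own toy model, not any carrier’s or vendor’s actual API), network slicing can be thought of as carving one physical 5G network into virtual slices, each with its own guaranteed characteristics:

```python
# Conceptual sketch of network slicing (illustrative only; not a real carrier API).
# One physical 5G network is divided into virtual slices, each with its own
# guaranteed quality-of-service profile for a different customer or service.
slices = [
    {"name": "video_streaming", "min_bandwidth_mbps": 50.0, "max_latency_ms": 50},
    {"name": "factory_robots",  "min_bandwidth_mbps": 10.0, "max_latency_ms": 5},
    {"name": "iot_sensors",     "min_bandwidth_mbps": 0.1,  "max_latency_ms": 500},
]

for s in slices:
    print(f"slice {s['name']}: >= {s['min_bandwidth_mbps']} Mbps, "
          f"<= {s['max_latency_ms']} ms latency guaranteed")
```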

The bottom line is that the transition to 5G opens up a world of interesting possibilities that go well beyond the level of competition for our current cellphone plans. In the short term, as we start to see the first real-world deployments and 5G-capable devices come to life, we’re bound to see some frustrations and challenges with the early implementations of the technology. Strategically and longer term, however, there’s no question that we’re on the cusp of an exciting new era. As a result, big changes, like the T-Mobile-Sprint merger and the launch of Dish as a new fourth US carrier, are likely only the beginning of some large, industry-shifting events that will be impacting not just the tech industry, but modern society, for some time to come.

Why Robot Umpires are Inevitable in Baseball’s Future

I have been a baseball fan since I was eight years old. Being born in the San Francisco Bay Area, the SF Giants were my team growing up, and I became a fan during the era of Willie Mays, Willie McCovey, and Orlando Cepeda.

I also got a chance to play baseball in junior high and high school and was a catcher with a pretty good batting average. I was too thin and short then to even consider playing baseball beyond high school, so I became an ardent baseball fan as well as a student of the game instead.

As a fan, I have watched hundreds of plays that I thought the umpire got wrong. For most of my life, we did not have instant replay, where an umpire can review a play if one of the managers challenges the decision. But that process takes time and slows down the game.

As a technologist, I have watched baseball games more closely with an eye on how technology could be used to call balls and strikes, which is one of the most subjective actions done by an umpire during a game. It appears that each umpire has their own version of a strike zone, and as they say, “call it as they see it.” But with TV broadcasts superimposing a true strike zone graphic based on a batter’s stance, a TV viewer can actually see if the pitch is a strike or a ball.

While umpires traditionally have an accuracy rate of between 90% and 95% with their calls, the calls they do miss could clearly impact everything from a batter’s statistics to the final outcome of the game. Instant replay does help with field calls, but it cannot be used today to change a ball or strike decision.
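To put that accuracy range in rough perspective, here is a quick back-of-the-envelope calculation. The number of called pitches per game is my assumption for illustration, not an official MLB figure:

```python
# Rough illustration (assumed figures, not official MLB data): if roughly 150
# pitches per game are taken rather than swung at, and the umpire gets 90-95%
# of those ball/strike calls right, the remainder are missed calls.
called_pitches_per_game = 150  # assumed for illustration

for accuracy in (0.90, 0.95):
    missed = called_pitches_per_game * (1 - accuracy)
    print(f"At {accuracy:.0%} accuracy, roughly {missed:.0f} ball/strike calls per game are wrong")
```

Even a handful of wrong calls per game is enough to swing a close outcome.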

Knowing technology as I do, and the power and accuracy of things like sensors, radar, laser guidance systems, and imaging, it has been clear to me for some time that the technology is already here that could enable robot umpires to assist real umpires in calling truly accurate balls and strikes.

Earlier this month, the independent Atlantic League became the first American professional baseball league to let a computer call balls and strikes at its All-Star game.

ESPN was there and wrote about how the plate umpire used technology to help him call balls and strikes:

“Plate umpire Brian deBrauwere wore an earpiece connected to an iPhone in his pocket and relayed the call upon receiving it from a TrackMan computer system that uses Doppler radar.

He crouched in his normal position behind the catcher and signaled balls and strikes.

“Until we can trust this system 100 percent, I still have to go back there with the intention of getting a pitch correct, because if the system fails, it doesn’t pick a pitch up, or if it registers a pitch that’s a foot-and-a-half off the plate as a strike, I have to be prepared to correct that,” deBrauwere said before the game.

It didn’t appear that deBrauwere had any delay receiving the calls at first, but players noticed a big difference.

“One time I already had caught the ball back from the catcher, and he signaled strike,” said pitcher Daryl Thompson, who didn’t realize the technology was being used until he disagreed with the call.

Infielder L.J. Mazzilli said a few times that hitters who struck out lingered an extra second or so in the batter’s box waiting on a called third strike.

“The future is crazy, but it’s cool to see the direction of baseball,” Mazzilli said.”

Up to now, Major League Baseball and especially the umpires have been opposed to using technology to call balls and strikes. From the umpires’ view, they see this as eventually taking away the need for them altogether. But the way technology was used in the Atlantic League All-Star game fully engaged the umpire and really just gave him a new tool, besides his eyes and subjective reasoning, to make sure his calls were more accurate.

I believe there are two solid reasons why we will eventually have robot umpires calling balls and strikes and making the game more accurate.

The first is that technology can help keep baseball interesting and relevant to a younger generation who, at the moment, are not embracing baseball as wholeheartedly as their parents did. Attendance at games has been down for the last three years.

“Of the league’s 30 teams, 18 are experiencing an attendance drop. And this is after a 2018 season in which attendance was down more than 3 million fans, an average of 1,237 per game.”

Millennials and Gen Zers are technology savvy and have many ways to entertain themselves these days. Their interests are spread thin, and baseball is just one of the things they may have an interest in. Given the tech-savviness of this younger generation, they most likely see how technology could enhance a game and make its outcome more accurate, which could increase their interest in baseball and draw them to watch games on TV or the internet, or to go to a ballpark themselves.

But the most important reason I believe robot umpires are inevitable is legalized sports gambling. I personally am not a gambler but have watched this “industry” with great interest over the last three decades. I have become even more interested in sports gambling now that it has started to infiltrate e-sports.

The Supreme Court recently ruled that a federal ban on sports wagering is unconstitutional.

Here is the conclusion of the majority opinion:

The legalization of sports gambling requires an important policy choice, but the choice is not ours to make. Congress can regulate sports gambling directly, but if it elects not to do so, each State is free to act on its own. Our job is to interpret the law Congress has enacted and decide whether it is consistent with the Constitution. PASPA is not. PASPA “regulate[s] state governments’ regulation” of their citizens. …. The Constitution gives Congress no such power. The judgment of the Third Circuit is reversed.

With this ruling, the court has left the choice of allowing sports gambling up to the states themselves. Here is a link to the states that already allow sports gambling and the current bills in the works in the states that do not allow sports betting yet but have proposed laws working their way through their legislatures.

Right after the Supreme Court ruling was announced, MLB released this statement.

Major League Baseball:

“Today’s decision by the Supreme Court will have profound effects on Major League Baseball. As each state considers whether to allow sports betting, we will continue to seek the proper protections for our sport, in partnership with other professional sports. Our most important priority is protecting the integrity of our games. We will continue to support legislation that promotes air-tight coordination and partnerships between the State, the casino operators, and the governing bodies in sports toward that goal.”

Once sports gambling becomes legal throughout most of the US, the pressure will be on MLB executives to integrate technology to call balls and strikes in both leagues. One thing that gamblers hate is subjective opinions that impact the potential outcome of a game. They want every call to be as accurate as possible so that their bets can be made based on quality data.

Of course, old-timers and baseball traditionalists will fight bringing robot umpires into the game. I have talked to quite a few of these folks, and their attitude is that the game has been played without this technology for over 100 years, and robot umpires, to them, would be blasphemy.

However, Major League Baseball is not only a game but also a $10 billion industry and has to be willing to change with the times. They know that without some serious changes and adjustments, they could lose the next generation of younger fans. If that happens, their growth in the future will decline.

Sports gambling will also put pressure on MLB leadership as it expands throughout the US and as professional and weekend sports gamblers want the game called as accurately as possible to assist their own analysis and betting decisions.

I don’t know how long it will be before we get robot umpires calling balls and strikes, but the technology is here to do it today. Robot umpires are inevitable, and it will only be a matter of time before it happens.

Podcast: Intel Apple Modem Business Sale, Facebook, Alphabet and Amazon Earnings

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the quarterly results from Intel and the sale of their modem business to Apple, discussing Facebook earnings and the state of social media, and chatting about the earnings from Google parent company Alphabet and from Amazon.

Apple’s Plans and Needs for Intel’s Smartphone Modem Business

It’s official. Apple is buying Intel’s smartphone modem business. Note that Intel still has its modem business for products like PCs and tablets; the IP it sold was specific to smartphone modems. Apple has been working with Intel for years now integrating their modems, so they had a very good sense of the underlying technology and whether they could build on it over the long haul.

This was relatively predictable. I started writing about this possibility in 2014 and then kept hinting at it even as Intel and Apple got closer in their joint modem efforts. It became very clear, quickly, that Apple would be Intel’s only smartphone modem customer and Intel was bleeding cash trying to keep up with Qualcomm and failing. It simply did not make sense for Intel to keep investing in smartphone modems, really modems overall, in my opinion. The only question I had, with regard to Apple buying this business, came after their license and chipset deal with Qualcomm. Apple appeared to be in a position to utilize Qualcomm technology, which is better than Intel’s, and it’s unclear if Apple can develop a better solution than Qualcomm. Their partnership with Qualcomm came off as a win-win for both companies.

In this article, I’m going to cover two points. The first being Apple’s long game here for building their own modems, including how 5G fits in. The second is what it means for Qualcomm.

Apple’s Long Game
The main point I want to make here is that Apple has yet to ship a product that includes a modem on its own custom-designed A-series processors. Apple has always had to include a thin modem (a dedicated chip that handles only the modem) in all its products containing A-series processors, like iPhone, iPad, and Apple Watch. There are great benefits to having a complete SoC (system-on-a-chip) that includes the CPU, GPU, modem, and other things. The modem was the big missing piece, and Apple can get a number of benefits by building a total chip solution that includes the modem and fine-tuning their hardware and software for this new, fully integrated chip.

Things like battery life, connectivity optimization, even security, and more are all benefits of tuning a system with a complete SoC. Qualcomm sells a complete solution to everyone but Apple (they just sell Apple the thin modem), and Qualcomm always highlights the benefits in overall system performance that come from a completely integrated SoC. In fact, many of Apple’s competitors benefit from this same dynamic, and now that Apple has the capability to integrate its own modem designs onto its own SoCs, it will get a great deal of system optimization and performance benefit. I’m not saying Apple’s modem will be faster than Qualcomm’s, only that they can tune the system better than they could before.

This is the reason I’ve known for a long time that Apple’s end goal was to build its own modem. But you don’t just wake up one day and build a modem from scratch, since baseband patents and IP cover the whole industry. With Intel’s IP, Apple is now able to forge a path to do its own modem designs with minimal license and IP fees to other companies. More on this in a bit.

Will Apple’s First Internal Modem be 5G?
No, I don’t think so, primarily because Apple has other products that are low-hanging fruit for 4G/LTE, particularly iPad and Apple Watch. Neither of these products needs 5G in the next few years, but both would benefit massively from a fully integrated Apple chipset with the modem on the SoC. Apple Watch, in particular, is my bet for the first product to see an Apple-designed modem, but it will be 4G/LTE, not 5G.

Speaking of Apple Watch, the products I’ve always figured were the primary reason Apple was working on designing its own modem are its wearable product line: not just Apple Watch, but AirPods and whatever else Apple has in the five-year pipeline. Remember, I said Apple currently needs a separate chip for modem connectivity. That separate chip takes up valuable space on the motherboard, and when you are talking about shrinking electronics to small sizes that we will wear, you need all the space on the motherboard you can get. Integrating the modem onto the SoC is essential for Apple’s wearable business and product roadmap. This is the ultimate endgame for a fully integrated Apple SoC.

Where Does this Leave Qualcomm?
Apple buying Intel’s smartphone modem business and IP is not necessarily all bad for Qualcomm. The deal Apple struck with Qualcomm includes a multi-year chipset supply agreement and a long-term deal to access Qualcomm’s IP portfolio. The chipset deal likely means Qualcomm 5G modems in iPhone until at least 2022. And the IP license is where Qualcomm may have some technology that Apple can use to build its own 5G modem.

Intel was nowhere close to building a 5G modem. In fact, I’m not even sure they would have gotten there anytime soon. The deal Apple made with Qualcomm exists because, at a technical level, Qualcomm is the only 5G game in town for the next few years. Apple may not even be able to do a 5G modem on its own, and having a licensing relationship with Qualcomm can help fill the gaps. Obviously, if what I’m suggesting pans out, Apple’s relationship with Qualcomm could be a long one, but it also won’t be as financially lucrative as Qualcomm would like.

During this 5G transition, Qualcomm is well-positioned and a strategic partner for Apple. Come the 2023-2025 timeframe, 5G will be quite mature, and perhaps that is the proper timing for Apple to bring its own modem design to iPhone. The bottom line here is that Qualcomm still has some of the best wireless technology in the world, and Apple has a long-term license deal for that technology. If, for whatever reason, Apple can’t do better than Qualcomm’s IP, then I’d like to see them still leverage it somehow.

One last point: I’m watching for Apple to make a move on an RF component provider like Qorvo or Skyworks. RF is critical to modem design, and Apple has worked with both of these companies before. Verticalizing on RF could help Apple optimize and tune its hardware, software, and services even more efficiently.

What’s Missing from The ‘Tech-Lash’: Consumers!

This week, the Justice Department announced that it is opening a broad antitrust probe into whether tech giants are unlawfully crowding out competition. This is another front in the “tech-lash” that has been building for a couple of years. The tech-lash includes concerns about a number of issues, including anti-competitive practices, widespread data breaches, illegal use of customer data, and other examples of poor practices and poor judgment. The tech industry, historically revered, and a symbol of U.S. economic leadership, is now increasingly vilified, and even blamed for increasing income inequality, housing prices, traffic congestion, and yet another poor season for the Mets. Telling friends at a cocktail party you work at Google or Facebook used to bring nods of admiration and some envy…now you’re sometimes put on the defensive.

But this so-called tech-lash still seems to be a largely internecine, intra-industry affair. The bulk of the criticism is coming from those in the game and in the know: regulators, media, analysts, advocacy organizations, and some companies looking to profit from the misdeeds and misfortunes of their competitors.

But what about consumers? It seems to me they are largely missing from the tech-lash. They are not leaving Facebook in droves, crying out for an alternative to Google search, or shopping less on Amazon. They’re not protesting in the streets, writing their Congressperson, or mounting e-mail campaigns. There aren’t emails coming from employers or being circulated among friend groups urging people to change their Facebook or Google privacy settings. In a year where Uber had numerous headline stories about poor executive/corporate behavior, some customers might have switched their loyalties over to Lyft, but the company still grew like a weed.

Ask ten (non-techy) friends whether they’ve changed any of their privacy settings in the past year and I’d bet eight say they haven’t. If you’re at a dinner party, talk might inevitably turn to our fraught political times, the crisis at the border, or how bad the traffic has become. But it’s unlikely anyone will spend much time opining on Facebook’s role in the 2016 election, complain about the duopoly in digital advertising, or express lament that Amazon has disrupted numerous sectors of retail.

Why is this? To begin with, most consumers like these services, despite the repercussions. They might think that the targeted advertising or fake news is annoying, but I’d wager a relatively small percentage believe it has negatively affected their daily lives in a serious way. A second aspect, and this is certainly true among younger people, is that they understand the tradeoffs. They know that if Facebook, Google, and the like are going to be free, they’ve got to make their money from somewhere.

And then, there’s sheer laziness and convenience. Here’s a personal example: it used to be that if I needed to buy new tennis balls, there were three sporting goods stores within three miles of my home in a close-in suburb of Boston. I could bike or take a quick drive over there, supporting a bricks and mortar retailer instead of ordering the new balls on-line. But as a result of Amazon, e-commerce, and the general problems affecting retail, all these stores have closed within the past three years. Now, my choices are to drive out to a strip mall seven miles away, fighting traffic, or to order the same product on-line, through Amazon or Tennis Warehouse, free shipping included – a process that takes about a minute.

I’ll add a final, perhaps less tangible factor. Many of these companies being examined by the DOJ are among the biggest success stories in American innovation in this generation. They’re all U.S.-based. They’re all companies whose products are used, in some shape or form, by a majority of the population. And many of us have profited from these companies’ rising stock prices, whether as direct investments or through index funds or retirement plans. So even though we might get obtrusive ads, read about the horrors of working in an Amazon warehouse, see Apple apps always show up at the top in search, or see a lot of junk stories on our social media news feeds, these companies and their products are intertwined in our daily lives. And most people would argue that these companies’ products and services have made their lives better, in some way.

Believe me, if regulators were going after Ticketmaster for outrageous ‘convenience’ fees, hotels for usurious ‘resort’ fees, or one of the big banks for predatory loan practices, there would be a lot more people jumping on the bandwagon.

Competitive Potential in Social Media

With the record-breaking, yet ultimately only a financial slap on the wrist, $5 billion Facebook-FTC settlement, along with Facebook’s still-strong earnings report, the company looks ever stronger as an entrenched incumbent. Facebook also disclosed that the FTC is opening another probe into the company around its position as an anti-competitive market player. Facebook has the largest social network and seems to be moving toward having the largest ad network of any company in history. From the outside looking in, it does feel like competition is ever harder in social media, and the idea that Facebook will ever be challenged feels doubtful.

I’ve written quite a bit about how some companies become a monopoly by default, and Facebook is undoubtedly a prime example. But that does not mean there are no significant cracks in the foundation, or that the competitive moat has not been weakened. Several things over the past few weeks show how competition to Facebook, and even Instagram, is possible.

Snapchat and FaceApp
The two things I mentioned are what we saw with FaceApp and some new details from Snapchat’s earnings. I wrote about FaceApp last week in a negative context, due to the privacy issues it raised and the observation that many consumers didn’t even consider their privacy at risk. But there is a positive takeaway specifically related to the idea of social media competition.

While we have no exact numbers on how many people downloaded FaceApp, it gained enough steam to become a viral sensation, and a large enough group of people downloaded it, if only to spend a few minutes trying the age filter, that it got national attention. Apparently, all you need as an initial hook is a fun camera filter to get potentially millions of people to at least try your app.

Similarly, Snapchat saw significant growth thanks to its gender-swap filter. While it is tricky to estimate how many of its new quarterly users joined just for that filter, what is notable is that Snapchat reported much more significant new user growth in the second quarter than was expected. The company said it added 13 million new users versus an estimate of around 2 million. I’d wager a lot of that new user growth came from people just wanting to try the gender-swap filter.

FaceApp proves a simple feature, like an interesting filter, can drive significant downloads, so it is not a leap to think the same dynamic played out for Snapchat with its gender-swap filter.

Competitive Potential
My takeaway from these two examples is further evidence of how fast an app can spread virally into the mainstream. We have to look at this as potential for competition to Facebook and Instagram. Granted, the challenge something like FaceApp or Snapchat has with viral growth is how to keep those new users around. The stickiness of the solution is a big part of how someone would compete with Facebook and Instagram, but the significant amount of viral growth indicates that gaining a large initial user base quickly is not the problem.

I mention this point because the key to any social media platform is an audience. If a platform has no users, it isn’t as interesting as one that has a larger social base. Therefore, the ability to quickly gain critical mass seems to be an important dynamic of competing, and it is one that both FaceApp and Snapchat show us can happen quickly if you have something that gets the attention of the mainstream. While competing is not easy, my overall point is that recent events should give us confidence that it is at least possible.

The antitrust probe and potential regulation around Facebook’s competitive tactics should also help open up opportunities that may not have existed before. One interesting thing I have observed over the past few years was the lack of interest so many VCs I work with had in any startup looking to compete with Facebook or Instagram. The common thinking was: why invest in something Facebook will just copy and kill if it feels threatened?

I already see renewed interest by VCs in some more socially focused startups, which means the tide is potentially turning. However, entrepreneurs need to come around to the idea that competing in social is possible and that venture money may again be interested in helping them scale their companies to compete.

That being said, and I’ll end on this point even though it needs more fleshing out in the future: the future is niche. What I mean by that is that Facebook’s scale and reach is a once-in-a-generation type of event. I’m not saying a single company won’t ever again amass a 2-3 billion person user base, but I am confident it will be a long time until we see it. That means a potential TAM of hundreds of millions, rather than billions, is the reasonable user base goal. The good news is niches can be extremely profitable.

The Potential for Smart Contacts

For the first seven years of my life, I lived in the home of my German grandmother and two of her daughters who, along with my mother and father, guided me in this early stage of my childhood.

One of the reasons my two aunts lived with my grandmother was that she was totally blind, and they devoted almost three decades of their lives to caring for her. She developed an eye disease related to glaucoma in the early 1940s, so when I was born, she had been blind for many years, and this was the only way I ever knew her.

Living with a blind grandmother has made me highly aware of the role the eye plays in our health, and every day, I am thankful for healthy eyesight. But if technologists have their way, the eye will soon become more than just a vehicle for delivering sight.

In April of 2016, Samsung received a patent for smart contact lenses with a built-in camera. Samsung began developing these smart contact lenses as a means to create a better augmented reality experience and, in the process, added the camera feature to its design.

As a diabetic, I have closely watched Google’s patent for smart contact lenses that, in their first iteration, would serve as a way to check blood sugar for diabetics. Theraoptix is working on contact lenses that deliver eye medication for the treatment of eye diseases.

But the one I am watching most closely is the patent that Sony filed, which includes a camera for taking pictures and recording video. Although Samsung has filed a similar patent, Sony’s experience in cameras, optical lenses, and video recording could make its effort the more important of the two.

A post by anonhq.com shared more details on the Sony patent:

“Sony’s patent doesn’t mean we’ll be seeing them anytime soon. Nevertheless, Sony’s release of the lens will contain a picture-taking unit, a central controlling unit, the main unit along with an antenna, a storage area, and a piezoelectric sensor.

The last-mentioned unit above is responsible for monitoring the time on how long the eyelids have remained opened, and it will also detect the blink that was done to take a picture, as well as the blinks that were done subconsciously. This will allow the unit to distinguish between taking pictures and a normal blink.

As mentioned in Sony’s patent, the subconscious blink is between 0.2 to 0.4 seconds. Thus the patent states that if the blink exceeds more than 0.5 seconds, then it was done on purpose and will be considered an unusual blinking, therefore, gesturing the unit to capture the image. The antennae will supply the power to the lens wirelessly, source it from the smartphone, a smart tablet or a computer. The technology that was first discovered by Nikola Tesla will use either radio waves, electromagnetic induction or electromagnetic field resonance, and to top it off, the smart lens will sport an autofocus and zoom ability.

But before happy blinking customers can get their hands on this latest device, and for the intelligence agencies to ‘blink’ on everything in their sight, the technology is still to go through stringent tests. Then again, technology such as this is an interesting concept wrapped up in the scary, depending on how it will be used.”
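To make the blink-gesture logic in that description concrete, here is a minimal, hypothetical sketch of how such a classifier might distinguish a deliberate capture blink from a subconscious one, using the 0.2-0.4 second and 0.5 second thresholds quoted above. The function and constant names are mine, not Sony’s:

```python
# Hypothetical sketch of the blink classification described in the patent summary.
# Per the quoted description, subconscious blinks last roughly 0.2-0.4 seconds,
# while a blink longer than 0.5 seconds is treated as a deliberate capture gesture.
DELIBERATE_BLINK_SECONDS = 0.5  # assumed cutoff taken from the quoted text

def classify_blink(duration_seconds: float) -> str:
    """Return 'capture' for a deliberate blink, 'ignore' for a normal one."""
    if duration_seconds > DELIBERATE_BLINK_SECONDS:
        return "capture"   # unusual blink -> gesture the lens to take a picture
    return "ignore"        # subconscious blink -> do nothing

# Example: a 0.3-second blink is ignored; a 0.7-second blink would trigger the camera.
for blink in (0.3, 0.7):
    print(blink, classify_blink(blink))
```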

Market Research Future has published a report called “Smart Contact Lenses Market, Forecast Up to 2023” that details much of what is going on in the smart contact lens world and its potential future.

There are two very interesting things tied to this concept of smart contact lenses worth mentioning. The first is the types of positive applications they could deliver, such as monitoring blood sugar, delivering eye medicine directly to the eye, and the convenience of being able to capture a picture or record a video at a moment’s notice without having to take out your smartphone or a dedicated camera. One can see many positive applications for smart contact lenses, at least on the surface.

The second is highlighted by the last line of the Anonhq piece. It says, “Technology such as this is an interesting concept wrapped up in the scary, depending on how it will be used.”

It would be a great tool for the spy community but could be misused as well. China already has surveillance cameras everywhere, but imagine if every person who used smart lenses could be enticed to share any captured images or video with the government as part of its surveillance program. The surveillance angle alone makes this a scary technology. It could also be misused in other ways, so one ethical question those working on smart lenses need to grapple with is not only the pros and cons but ways to make them non-intrusive, so they don’t violate anyone’s personal rights.

The eye does seem to be a new frontier for the tech industry to explore in terms of making it smarter. I personally wish they would invest in finding cures for blindness, glaucoma, and other diseases that impact a person’s sight and lifestyle, but making the eye smarter might contribute to finding those cures in the process. One can only hope this will be true!

Could VR Be a Better Short-Term Option than AR for Apple?

I’ve lost count of the products Apple has been rumored to have canceled, products that, of course, we never knew for sure were coming to market in the first place. The latest, covered by DigiTimes earlier this month, is the rumored pair of augmented reality (AR) glasses.

The article seems to refer to project T288, which CNET covered extensively in an article in the spring of 2018. The headset was supposed to deliver a dual AR and virtual reality (VR) experience with an 8K display for each eye. One year earlier, Bloomberg had written along the same lines about project T288, but referring only to AR capabilities.

Going through Apple’s patents, hires, and acquisitions, it would be hard to believe AR and VR aren’t at least a hobby in Cupertino.

AR: The Long Game

Over the past couple of years, Apple’s CEO Tim Cook has been very vocal about the opportunity that augmented reality brings to consumers, an opportunity he deems much bigger than VR, mostly because of the pervasiveness of the use cases among both consumers and enterprises.

While the industry is still touting the success of Pokémon Go, the reality is that from a technology point of view, AR is still a long way away from delivering a truly immersive experience. Even early 2020 seems like an optimistic timeframe.

More importantly, I believe that for AR to truly take off, consumers’ acceptance needs to grow exponentially compared to where we are today. What is deemed acceptable in the outside world, or in any space shared with strangers, is very different from what we feel comfortable doing in the safety of our own home. Just look at how consumers react to digital assistants in the home versus outside the home, and the difference in uptake for something that, compared to AR, is quite simple.

Aiming for a set of glasses that can be comfortably worn for extended periods of time in different environments and lighting conditions clearly poses some challenges, from the tradeoff between power efficiency and weight, to the size of the field of view, to an aesthetically pleasing design.

The other aspect that we have to worry about with a more widespread use of AR through glasses rather than phones is how laws and regulations will decide to deal with it. Will it be illegal to walk or drive using AR glasses? Will people be concerned about privacy? We have seen similar concerns with Google Glass and Spectacles, with public places banning their use to safeguard customers’ privacy.

It seems to me that even if Apple had a technologically viable solution, there are still many question marks that might have driven management to pause. At the end of the day, without laws and regulations falling into place, Tim Cook’s vision of AR to “amplify human connections” would not come to fruition.

VR: The Content Opportunity

Tim Cook’s view on VR’s potential is not as grandiose as his view on AR, to say the least. At an event hosted by the University of Oxford in 2017, in response to a student’s question about which technologies would prove transformative, Mr. Cook said that while he sees AR playing a role in education, consumer, entertainment, and sports, as well as in every business, he thought VR could do cool niche things but would not have a profound impact in the same way AR can.

We are now almost two years on from that interview, and while I don’t think Mr. Cook’s views have drastically changed, Apple, as a company, is in a very different place. We are a couple of months away from the launch of Apple’s TV+ service and, with that, the debut of its own content productions.

If I look at the content that was presented during the launch of TV+, I cannot help but think that, with such a focus on storytelling, VR might turn out to be very good at making the audience appreciate those stories more and at turning a passive audience into an active one. Maybe the connections that are amplified here are not among humans, as in the case of AR, but between the content and the viewer. Think, for instance, of the opportunity to become a character in the story or to view the story from a specific perspective.

Delivering content in VR could help with differentiation from competitors, and possibly between platforms if Apple ever decided to bring Apple TV+ to Windows or Android. VR content could also add to the return on investment of the content subscription by providing extra content without necessarily requiring a brand-new investment in a separate production.

Outside of entertainment, I see both VR and AR playing a role depending on the use case. Education and business might want to leverage both, and I expect Apple to want to make both iPhone and iPad the best companions to either experience. From a cost perspective, the flexibility for a school to have a handful of headsets that create an experience shared across tens of iPads and iPhones might be much more appealing to school districts, at least as they figure out how to justify the investment and quantify the return on it.

Ultimately, I have no idea whether or not Apple canceled its rumored glasses or if there were glasses in the first place. What I am arguing here is that if Apple wanted to play in both AR and VR, it might be beneficial to leverage VR as the company ramps up its video content offering. While it might not be profound in augmenting interpersonal connections, it might be profound in augmenting the revenue opportunity.

Substack’s Funding, The Journalism Unbundling, The Newsletter Challenge

Last week it was reported that Substack closed a funding round of $15.7 million. I have been watching Substack with interest and have a number of friends in journalism who have Substack newsletters. Substack is a platform that allows anyone to create and distribute a newsletter, for free or for a fee. Having run a site with a newsletter as part of the service since 2012, I have some broader thoughts on the opportunity and the challenge.

Journalism Unbundling
Former Netscape CEO Jim Barksdale famously said there are “only two ways to make money in business: one is to bundle; the other is to unbundle.” That continuous cycle of bundling, unbundling, and bundling again seems to have been accelerated by the Internet, which has also made the idea of going direct to consumer much easier. I spend a lot of my time studying D2C (direct to consumer) companies because it is one of the more fascinating trends of our time. The Internet has given anyone direct access to consumers without the need to aggregate, and in this case, it applies to news, analysis, and the individual voice.

Newsletters are becoming the de facto way journalists are going directly to the consumer, or perhaps direct to the reader, with their style of reporting. It is interesting to subscribe to a number of journalists’ newsletters; they are often much more interesting to read than what those journalists write for a major newspaper or publisher. This is likely due to the lack of an editor, or the limitations on their style at a major news outlet, but more personality and unique commentary often come through in their newsletters than in their mainstream published articles.

Substack offers them a platform, the backend service, as well as a way to monetize should they choose. These are all things I had to invest in and build to start Tech.pinions’ subscription service in 2012; thankfully I didn’t have to build it all from scratch, but it did take a lot of work customizing and integrating many different components. That barrier is now gone for many, thanks to Substack.

When I started Tech.pinions in 2011, and then the subscription service in 2012, it was because of the extremely poor state of journalism at the time. At least in my opinion, much of the writing, as new upstart publishers and blogs launched, was headline-driven and rarely covered the key parts of a story with any real insight or interesting commentary. My goal was simply to add more voices to the public landscape and get better quality technology content and analysis in front of more people. From my viewpoint, anything that makes it easier for better content to go mainstream is a good thing. I’m all for this idea, and Substack has this potential.

That being said, the challenge of unbundling journalism will still exist, and this is the harder part as voices try to go independent.

The Newsletter Challenge
As more journalists, writers, and content producers look to go direct to the reader, several things happen that add complexity to the market. First, there is quite a bit of competition: competition for time, attention, share of wallet, etc. One of the biggest pain points I regularly discuss with readers of our Think.tank newsletter is how they often struggle to read everything we write on a weekly basis. On average, our newsletter open rates are 65% on the first day, with that same newsletter reaching nearly 85% by the end of the week. My interpretation is that most of you read it on day one, but still, roughly a quarter of those who eventually open it take a few days to a week to get caught up. I note that email inboxes can be overloaded, often with higher priority emails fighting for your time every day.
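
For what it’s worth, here is the quick back-of-the-envelope math behind that estimate, using only the two open rates quoted above (illustrative numbers, not a formal analysis):

```python
# Back-of-the-envelope math on the open rates quoted above.
day_one_open_rate = 0.65   # share of all recipients who open on day one
week_open_rate = 0.85      # share of all recipients who open within a week

late_openers = week_open_rate - day_one_open_rate          # ~0.20 of recipients
share_of_eventual_readers = late_openers / week_open_rate  # ~0.24

print(f"{late_openers:.0%} of recipients open after day one")
print(f"That is {share_of_eventual_readers:.0%} of everyone who eventually opens it")
```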

Here, then, lies one of the bigger problems for newsletters going forward. Email is not the best RSS tool, but that is exactly what email newsletters are shaping up to duplicate. Email is such a key part of everyone’s daily workflow that newsletters are often a distraction even when they are valued. Certainly, it is possible to sign up with a non-work email and just check it later, but I’d wager most people use unified inboxes now and check all their accounts frequently. Any newsletter distributed to email is competing against daily work-based workflows and often timely communication from colleagues and bosses. Honestly, it is not the best place to distribute news, journalism, or analysis, but it is the best option we have today.

Websites serve as great aggregators that allow people to get caught up on the news on their own time. I already get eight newsletters in my email, and I struggle to read all of them each day. There is some value to bundling, in this case, that gets lost with newsletters, and I maintain that will be a challenge as more journalists look to go out on their own.

Of course, that may not be a goal. Maybe they just want a direct line to their readers and still plan to write for their newspaper and make a living that way. But for those who wish to go out on their own, and make their newsletter their primary business, my hunch is many of them may struggle.

This is one of those situations where I completely understand the desire and the opportunity, but I’m not sure we have solved the best way forward yet. While not perfect, something like The Athletic makes the most sense to me, largely because all my favorite sports writers have moved to The Athletic and I enjoy being able to curate my experience to just the sports teams I am passionate about. Yes, it is an aggregator, but it is also one with the best voices and passion-fueled content customized to the reader’s interests.

I’m arguing for a bundle, yes, but a completely different kind of media bundle. It really is a cycle of bundling and unbundling, but the consumer convenience of media bundles is so high that I struggle to see how we get around it for the mainstream consumer.

The Contradictory State of AI

For most major tech advancements, the more mature and better developed a technology gets, the easier it is to understand. Unfortunately, it seems the exact opposite is happening in the world of artificial intelligence, or AI. As machine learning, neural networks, hardware advancements, and software developments meant to drive AI forward all continue to evolve, the picture they’re painting is getting even more confusing.

At a basic level, it’s now much less clear as to what AI realistically can and cannot do, especially at the present moment. Yes, there’s a lot of great speculation about what AI-driven technologies will eventually be able to do, but there are several things that we were led to believe they could do now, which turn out to be a lot less “magical” than they first appear.

In the case of speech-based digital assistants, for example, there have been numerous stories written recently about how the perceived intelligence of personal assistants like Alexa and Google Assistant is really based more on things like prediction branches that have been human-built after listening to thousands of hours of people’s personal recordings. In other words, people analyzed typical conversations based on those recordings, determined the likely steps in the dialog, and then built sophisticated logic branches from that analysis. While I can certainly appreciate that this represents some pretty respectable analysis and the type of percentage-based predictions that early iterations of machine learning are known for, it’s a long way from any type of “intelligence” that actually understands what’s being said and responds appropriately. Plus, it clearly raises some serious questions about privacy that I believe have started to negatively impact the usage rates of some of these devices.
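
To illustrate what a hand-built logic branch of this kind might look like, here is a deliberately simplistic, hypothetical sketch; the phrases and canned replies are invented for the example and are not how Alexa or Google Assistant are actually implemented:

```python
# Hypothetical hand-authored dialog branches: canned responses keyed off the
# phrases analysts expect to hear, rather than any real understanding of the request.
DIALOG_BRANCHES = {
    "weather": "It looks sunny today.",          # likely follow-ups mapped by hand
    "timer":   "For how many minutes?",
    "music":   "Playing your favorites playlist.",
}

def respond(utterance: str) -> str:
    """Pick a scripted reply by keyword match; fall back when nothing fits."""
    for keyword, reply in DIALOG_BRANCHES.items():
        if keyword in utterance.lower():
            return reply
    return "Sorry, I didn't catch that."  # the branch tree simply runs out

print(respond("What's the weather like?"))
print(respond("Explain quantum entanglement"))  # no branch covers this request
```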

On top of that, recent research by IDC on real-world business applications of AI showed failure rates of up to 50% in some of the companies that have already deployed AI in their enterprises. While there are clearly a number of factors potentially at play, it’s not hard to see that some of the original promise of AI isn’t exactly living up to expectations.

Of course, a lot of this is due to the unmet expectations that are almost inevitably part of a technology that’s been hyped up to such an enormous degree. Early discussions around what AI could do implied a degree of sophistication and capability that was clearly beyond what was realistically possible at the time. However, there have been some very impressive implementations of AI that do seem to suggest a more general-purpose intelligence at work. The well-documented examples of systems like AlphaGo, which could beat even the best players in the world at the very sophisticated, multi-layer strategy necessary to win at the ancient Asian game of Go, for example, gave many the impression that AI advances had arrived in a legitimate way. In addition, just this week, Microsoft pledged $1 billion to a startup called OpenAI LP in an effort to work on creating better artificial general intelligence systems. That’s a strong statement about the perceived pace of advancements in these more general-purpose AI applications and not something that a company like Microsoft is going to take lightly.

The problem is, these seemingly contradictory forces, both against and for the more “magical” type of advances in artificial intelligence, leave many people (myself included) unclear as to what the current state of AI really is. Admittedly, I’m oversimplifying to a degree. There is an enormous range of AI-focused efforts and a huge number of variables that go into them, so it’s not realistic to expect, much less find, a simple set of reasons for why some AI applications seem so successful and why others are so much less so (or, at the very least, a lot less “advanced” than they first appear). Still, it’s not easy to tell how successful many of the early AI efforts have been, nor how much skepticism we should apply to the promises being made.

Interestingly, the problem extends into the early hardware implementations of AI capabilities and the features they enable as well. For example, virtually all premium smartphones released over the last year or two have some level of dedicated AI silicon built into them for accelerating features like on-device face recognition, or other computational photography features that basically help make your pictures look better (such as adding bokeh effects from a single camera lens, etc.). The confusing part here is that the availability of these features is generally not dependent on whether your phone includes, for example, a Qualcomm Snapdragon 835 or later processor or an Apple A11 or later series chip, but rather on what version of Android or iOS you’re running. Phones that don’t have dedicated AI accelerators still offer the same functions (in the vast majority of cases) if they’re running newer versions of Android and iOS, but the tasks are handled by the CPU, GPU, or other component inside the phone’s SoC (system on chip). In theory, the tasks are handled slightly faster, slightly more power efficiently, or, in the case of images, with slightly better quality if you have dedicated AI acceleration hardware, but the differences are currently very small and, more importantly, subject to a great deal of variation based on software and software layer interactions. In other words, even phones without dedicated AI acceleration at the silicon level are still able to take advantage of these features.

This is due, primarily, to the extremely complicated layers of software necessary to write AI applications (or features). Not surprisingly, writing code for AI is very challenging for most people to do, so companies have developed several different types of software that abstract away from the hardware (that is, put more distance between the code that’s being written and the specific instructions executed by the silicon inside of devices). The most common layer for AI programmers to write to is what are called frameworks (e.g., TensorFlow, Caffe, Torch, Theano, etc.). Each of these frameworks provides different structures and sets of commands or functions to let you write the software you want to write. Frameworks, in turn, talk to operating systems and translate their commands for whatever hardware happens to be on the device. In theory, writing straight to the silicon (often called “the metal”) would be more efficient and wouldn’t lose any of the performance benefits in the various layers of translation that currently have to occur. However, very few people have the skills to write AI code straight to the metal. As a result, we currently have a complex development environment for AI applications, which makes it even harder to understand how advanced these applications really are.
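
To make that layering a bit more concrete, here is a minimal, hypothetical sketch using PyTorch (a descendant of the Torch framework mentioned above); the tiny model is made up for illustration, but it shows how code written at the framework layer runs unchanged whether or not dedicated acceleration hardware is present:

```python
# Illustrative sketch, not any vendor's actual stack: the same framework-level
# code runs whether or not dedicated acceleration exists underneath it.
import torch
import torch.nn as nn

# Pick the best available backend; fall back to the CPU when no accelerator
# is present, much as phones fall back to the CPU or GPU inside the SoC.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny stand-in model written purely at the framework layer.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)

# The identical call works on either backend; the framework and the layers
# below it decide how the math actually reaches the silicon.
features = torch.randn(1, 128, device=device)
scores = model(features)
print(scores.shape, "computed on", device)
```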

Ultimately, there’s little doubt that AI is going to have an extremely profound influence on the way that we use virtually all of our current computing devices, as well as the even larger range of intelligent devices, from cars to home appliances and beyond, that are still to come. In the short term, however, it certainly seems that the advances we may have been expecting to appear soon, still have a way to go.