What if The T-Mobile, Sprint Merger Doesn’t Go Through?

on April 18, 2019
Reading Time: 4 minutes

Wednesday was quite the day for the wireless industry, with three front-page Wall Street Journal articles: “Apple, Qualcomm End Legal Feud”; “U.S. Threatens to Sink T-Mobile, Sprint Merger”; and a feature piece on Nokia, with the Huawei situation as a backdrop. And within hours of the Apple-Qualcomm agreement, Intel announced it was exiting the 5G modem business. That’s a lot to unpack.

There are some quite interesting ironies here, given Qualcomm’s market power on one hand, and the U.S. government’s near-irrational focus on wireless industry competitiveness on the other. So the tea leaves appear to be reading more negative on a deal that would create a viable competitor to AT&T and Verizon in wireless, and increase the chances of competition in the near-monopolistic broadband industry. At the same time, the government seems blasé that well-heeled Intel can’t effectively compete in the wireless modem business, facilitates a duopoly in network equipment by blocking Huawei from supplying its gear to U.S. operators (and encouraging other governments to do the same), and allows further consolidation in the media business.

I’m not going to re-hash the arguments in favor of the T-Mobile/Sprint merger. But I do believe that the DOJ and other government entities seemingly leaning against the merger as currently structured have not fully thought through the implications of the deal not going through. President Obama was famous for always asking “and then what?” when confronted with difficult policy choices. As in, “we send troops to Syria…and then what?” So, the DOJ blocks the T-Mobile/Sprint deal…and then what?

The first implication that has not been fully thought through: what happens to Sprint? It’s not often that the acquiree in a deal mounts a campaign in its favor. Sprint has not been able to compete effectively for years, and its principal owner has shown little appetite to up its investment. Will Sprint be left to slowly bleed out? How good is that for consumers? Scenario two is that Sprint gets acquired by somebody else, on the relative cheap. Cable would be the natural fit, but has historically shown little appetite to do so. But for kicks, let’s say Comcast or Charter did buy Sprint. How beneficial would that be? Consumers aren’t exactly begging cable to offer them a wireless service, and they haven’t exactly flocked to cable’s relatively me-too MVNO offerings. And how would this be good for broadband competition, given cable’s current “buy my Triple Play or I’ll charge you through the nose for broadband” pricing, and the fact that U.S. broadband pricing is among the highest in the OECD?

And what happens to T-Mobile? The picture is more optimistic, but the company is still weakened. It will have to reduce planned capex on 5G, will not be able to offer a competitive fixed wireless product in the absence of that 2.5 GHz spectrum, and will have greater difficulty competing in the enterprise market. Oh, and it will have to go out and spend billions more on spectrum, which is, ultimately, an indirect tax on consumers.

Government has also not fully considered the DISH angle here. If the T-Mobile/Sprint merger were to go through, two positive-for-competition results are likely. First, DISH is in a stronger position to build out a wholesale wireless network, given the better prospects for a creative competitive offering (Cable? Google? Amazon?) in a three, rather than four player market. Second, a stronger New T-Mobile puts greater pressure on Verizon to do some sort of deal with DISH. In short, DISH’s spectrum is likelier to be put to productive use, and sooner, in a New T-Mobile scenario.

I’ll reiterate a concern that the government is looking at this merger through too narrow a lens. Yes, given the raft of consolidations over the past few years, the high-level optics of going from four to three wireless competitors don’t look great. But the DOJ and other federal entities seem fixated on the HHI index for wireless, while not applying that same rigor to other sectors of the telecom/digital media/tech landscape, which are far more concentrated.
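For readers unfamiliar with the metric regulators lean on here, the Herfindahl-Hirschman Index is just the sum of the squared market shares of every firm in a market. A minimal sketch, using hypothetical round-number subscriber shares (not actual carrier figures), shows why a four-to-three merger moves the needle:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares.

    Shares are in percentage points, so the index ranges from near 0
    (fragmented market) to 10,000 (pure monopoly).
    """
    return sum(s ** 2 for s in shares_pct)

# Four equally sized carriers land exactly at 2,500, the threshold
# regulators treat as a "highly concentrated" market.
print(hhi([25, 25, 25, 25]))  # 2500

# Hypothetical pre-merger shares for four national carriers
pre_merger = [35, 33, 19, 13]
# The same market after the two smallest players combine
post_merger = [35, 33, 19 + 13]

print(hhi(pre_merger))   # 2844
print(hhi(post_merger))  # 3338
```

The roughly 500-point jump in this toy example is the kind of delta that draws antitrust scrutiny; the column’s point is that the same arithmetic, applied to adjacent media and tech sectors, would yield far higher concentration numbers.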

I’d suggest a different approach. First, the government needs to consider what is better for broadband competition, not just wireless competition. The focus on four-to-three in wireless seems myopic, given the state of competition in broadband. Yes, there are would-be competitors in broadband such as Starry, Google/Webpass, and Verizon 5G Home, but those deployments are moving at a relatively slow and piecemeal pace. Second, what is best for U.S. 5G leadership? Better to have three really good 5G networks, and more quickly, than two good 5G networks and some uncertainty as to the breadth, depth, and timing of T-Mobile’s and Sprint’s deployments if they were to remain standalone. Third, recognize that a big part of the 5G opportunity is in the enterprise, industrial, and IoT markets. In wireless today, that’s a relative duopoly, with AT&T and Verizon holding an outsized share compared to the consumer wireless space. New T-Mobile would have a much better chance of competing there than either company would as a standalone wireless pure play. It’s not just about deploying the infrastructure; it’s also about building a significant sales and marketing force to address the broader enterprise opportunity.

I am more optimistic than many other industry and financial analysts that the deal will go through, though there might be more onerous terms and concessions than initially contemplated. Those opposing this merger have raised valid points about the potential effects of consolidation on competition and price. But they have not adequately considered the longer-term implications if the deal does not go through.

Additional Notes on Qualcomm and Apple Settlement

on April 18, 2019
Reading Time: 4 minutes

Yesterday’s note got quite long, so I saved some talking points for a follow-up today. I’ve also had several discussions with investors focused on Apple and Qualcomm which helped me flesh out a few more thoughts as well.

Qualcomm and Apple is No Longer Qualcomm vs. Apple

on April 17, 2019
Reading Time: 6 minutes

What a day yesterday! The biggest news of the day, probably even the month, is that Apple and Qualcomm have settled their dispute over royalties and licensing and dropped all litigation. There is so much to unpack about this and the broad implications for the industry. But the day was not over! Hours after the news broke that Qualcomm and Apple had settled, which ultimately led to a deal for Qualcomm to supply chips to Apple and for Apple to acquire a license to Qualcomm’s patent portfolio, Intel announced it was exiting the 5G smartphone modem business! The day felt like weeks when all was done, but the events were entirely linked. Understanding how we got here is the crucial first insight.

Why is Everybody Getting into Wireless Earbuds?

on April 17, 2019
Reading Time: 3 minutes

In just over a week we have heard rumors that both Amazon and Microsoft (under its Surface brand) might be bringing wireless earbuds to market. This should be no surprise to anyone, but not for the reason most highlight: wanting a piece of Apple’s action with AirPods.

There is no question about Apple’s success with AirPods. Apple managed to get AirPods adopted across gender, age, and even income levels, despite a price point that does not put them in the “most affordable” category. The experience is described by many as magical. In a study we at Creative Strategies conducted with Experian when AirPods first came out, customer satisfaction was the highest we had seen for a new product from Apple: 98% of AirPods owners said they were very satisfied or satisfied. Remarkably, 82% said they were very satisfied. By comparison, when the iPhone came out in 2007, it held a 92% customer satisfaction level; the iPad in 2010 had 92%; and the Apple Watch in 2015 had 97%.

Assuming Microsoft and Amazon are just after the revenue that a good set of wireless earbuds could generate is a little shortsighted.

Voice and Ears

Ambient computing and voice-first are certainly big drivers for both Microsoft and Amazon. As computing power is spread out across devices and digital assistants help bridge our experience across them, voice has grown in importance as an interface. Many consumers are, however, less comfortable shouting commands across a room or speaking to technology outside the “safety” of their own home. As voice moves into the office, the need and desire to speak quietly to an assistant and hear it respond is even more evident.

Wireless earbuds that can be worn comfortably throughout the day allow us to build a better relationship with our assistants and, even more so, build our reliance on them. Interestingly, I would argue this is where AirPods have not been as successful as Apple might have hoped, though through no fault of their own and more due to some of Siri’s limitations.

For both Alexa and Cortana, which do not have a smartphone they can call home, wireless earbuds are a great way to be with a user in a more direct and personal way, rather than being relegated to an app. As I often say, this is not about consumers having only one assistant but about making the assistant they use most often more intelligent, thereby creating a virtuous circle: the more I use it, the better it gets, and the more I want to use it.

Eyes and Ears

Aside from voice and ambient computing, another trend that will benefit wireless earbuds is augmented reality. Starting with phones, consumers can build the habit of wearing wireless earbuds while consuming information through their phones, making the earbuds feel natural rather than carrying the stigma that Bluetooth headsets had when they first came to market.

In the not-too-distant future, as we see more use cases focused on displaying information across apps and we move from phones to glasses, wireless earbuds will play an even more critical role in our augmented reality experience.

No Longer an Accessory

Whether they are critical to our relationship with a digital assistant or they help us immerse ourselves in an augmented reality experience, what is clear is that headsets overall are no longer an accessory but a device in their own right, one that for many vendors will grow into a platform.

Sensors have already allowed headsets, whether earbuds, over-ear, or on-ear, to become smart in ways that improve the user experience, such as detecting whether you are wearing them to determine if you want to pick up an incoming call on the phone or the headset. Plantronics and Jabra have had these kinds of features for years. Improvements in miniaturization added functionality that turned some earbuds into wearables, or hearables if you prefer: devices that can track full workouts, like the Bragi Dash. Considering the great work Microsoft did with the Band (not so much on the hardware as on capabilities), it could even think beyond Cortana and leverage some of that know-how to deliver a fully fledged hearable solution, increasing stickiness and return on investment.

I would not be surprised if Apple considered the role AirPods or Power Beats Pro could play as a wearable device as an alternative to Apple Watch for those users who do not want to wear a watch but are interested in fitness tracking. I would also expect Samsung to consider a “sport” or “active” version of their Galaxy Buds to cater to a similar market.


AirPods have certainly become the benchmark for wireless earbuds in the same way the iPhone has been the benchmark in the smartphone market. The “AirPods killer” is the earbuds’ version of the smartphone segment’s “iPhone killer.” Yet I find that when it comes to wireless earbuds, there is much more dynamism in what brands can deliver and how they differentiate by building on their core competencies, including artificial intelligence and machine learning, which will make it harder to compare like for like.

Betting on Disney

on April 16, 2019
Reading Time: 4 minutes

We have known for a while that Disney planned to launch a new digital subscription service built around Disney’s strongest brands. Already having a digital subscription offering for ESPN (ESPN+) and a majority stake in Hulu, Disney will use Disney+ to round out its subscription lineup and gain an extremely strong position as the way we pay for entertainment content changes. Before looking at what strategic advantages Disney has in this race, and at the impact on the competitive market, it’s important to know that cord cutting (in the US) is accelerating at rates faster than previously assumed.

Samsung Galaxy Fold Unfolds the Future

on April 16, 2019
Reading Time: 4 minutes

I have seen the future. I have touched the future. I’ve experienced the future. And I love it.

How you ask? I’m one of the lucky few who has gotten to play for a few hours with the world’s first commercially available foldable phone, the Samsung Galaxy Fold (set for official release on April 26), and it’s amazing. The experience of looking at the normal-sized 4.6” front display on the device, and then unfolding it to unveil the same app in much larger form on the beautiful 7.3” screen is something I don’t think I will get tired of for some time.

But, it’s not perfect. First, at a price of nearly $2,000, it’s clearly not for everyone. This is the Porsche of smartphones, and not everyone can or will want to pay that much for a phone. Second, yes, at certain angles or in certain light, you can notice a crease in the middle of the large display when the phone is unfolded. In real-world use, however, I found that it completely disappears—it didn’t bother me in the least. Finally, yes, it is a bit chunky, especially compared to the sleek, single-screen devices to which many of us have become accustomed. However, it’s not uncomfortable to hold, and most importantly, it will still easily fit into a pants pocket (or nearly anywhere else you store your existing smartphone).

More importantly, the Galaxy Fold completely transforms how we can, and should, think about smartphones. Open up the phone and you’ll immediately recognize that this is an always-connected computer that you can carry in your pocket. Practically speaking, it lets you do all the digital activities we’ve grown attached to in an easier, faster, and profoundly more satisfying way.

Want to watch TV shows or movies on the go? You can’t get a better or more compelling mobile experience right now than what you’ll see on the Galaxy Fold. Looking for directions? Start your map search on the front screen of the device, then unfold it to display the entire area around your destination. It’s a revelation. Want to web surf, and chat, and check out social media at the same time? The Fold’s ability to simultaneously show three different applications in reasonably-sized windows—a feature Samsung calls Multi-Active Windows—matches the kind of experience that has required a large tablet or PC in the past.

I could go on, but I think you get the idea. The Galaxy Fold radically changes how we’re going to think about and use mobile devices, and frankly, makes most of our existing phones look a bit—no, a lot—old-fashioned. I realize it may sound somewhat hyperbolic, but I honestly haven’t been this excited about and fascinated with a tech device in a very long time…as in, since my original experience with a Sony Walkman (yes, that long ago). It’s the kind of device that makes you look at other existing products in a profoundly different way. Having said that, with a device this different, and this expensive, you’re going to want to try it out yourself to really see if it works for you.

While the Galaxy Fold is radically different from all other smartphones in some critical ways, it’s important to remember that it is, fundamentally, still an Android phone, with all that entails. For existing Android phone owners, this means that, other than a few simple new ways Samsung has created to work with multiple apps on the large display, it works like your existing phone. App compatibility is supposed to be very good on the Fold, though there are some apps, like Netflix, that don’t currently support multitasking windows, and it’s still too early to tell for sure.

For iPhone owners who may be tempted to switch over to the dark side (and I’m guessing there could be a reasonable number of those with this new product), it does mean getting used to Android, finding a few new apps, and—if you can handle it—giving up the blue bubbles of iOS-only threads in your messaging apps. In exchange, however, you’ll get access to an experience that Apple isn’t likely to offer for several years. Plus, given the level of multi-platform application and services support that now exists, it’s nowhere near as big a concern as it used to be.

For everyone, you’ll get six cameras—including the same three-camera package of wide angle, telephoto, and ultrawide on the S10 series—two built-in batteries, and the ability to share your battery power with others. Inside the box, you also get a set of Samsung’s Galaxy Buds wireless earbuds that can also be charged with the power sharing feature.

There’s been an enormous amount of speculation and build-up around not just the Galaxy Fold, but the foldable smartphone category in general, with many naysayers suggesting they’re little more than a gimmicky fad. While on the one hand I can appreciate the skepticism (we’ve certainly seen more than our fair share of products that ended up being a lot less useful than they initially sounded), I really don’t think that will be the case with the Galaxy Fold.

In fact, looking back historically, I wouldn’t be surprised if the release of foldables is seen as being just about as important as the release of the iPhone. It’s that big of a deal. Of course, as with the iPhone, we will undoubtedly see several iterations over time that will make the current Galaxy Fold look old-fashioned itself. But for those of us living in the present and looking to the future, the revolutionary new Galaxy Fold offers a very compelling path forward.

A Twist on Foldable Smartphones

on April 15, 2019
Reading Time: 4 minutes

Last May, I attended the SID conference in L.A., the premier display conference in the world. At that show, I saw the first foldable displays that could be used in a smartphone and unfolded to become a small tablet. I wrote about it and laid out how companies like BOE, Visionox, and Samsung showed prototypes of this foldable form factor, and how it could drive a new type of smartphone design in the future.

Since then, Samsung has introduced its first foldable smartphone, known as the Galaxy Fold, and it will start taking orders for the device this week.

When I saw the BOE prototype at last year’s SID conference, I got to hold it, play with its screen, and fold it at least five times. The good news is that, true to form, when unfolded it becomes a small tablet. But in folded mode it was not a great smartphone, and it did not fit well in my pocket.

While this form factor has become the standard view of how foldables are seen at the moment, I think the jury is still out on whether a foldable phone that becomes a small tablet has a future. I would even question whether there is a solid business case for a dual-purpose smartphone.

I have seen a lot of speculation that this form factor could be successful, but no serious research that even hints at whether a foldable smartphone that doubles as a small tablet is what people want.

But the first reviews of the Samsung Fold are just coming out, and the initial response to it is relatively favorable.

Here are a few of the early review comments:

Geoffrey Fowler, Washington Post:

“It’s going to take more time to understand whether the Fold is the future or just a Frankenphone. A smartphone and tablet in one could be convenient … or do both jobs less well. I suspect it has more potential as a replacement for a tablet than as a phone. To find out, I would need to operate the Fold one-handed on my morning commute, try to burn through emails at a coffee shop, and catch up on my Netflix queue on a flight.

Samsung still has a lot to figure out on this. Perhaps that’s why it’s focusing on a high-end — and more-forgiving — market for its first folding phone.

Design critics have said the Fold suffers from the problem of combining desires that sound reasonable together but end up ruining each other — like the Homer Simpson Car on a beloved episode of “The Simpsons.”

To me, the Fold’s usefulness as a one-handed phone seemed to take a back seat to its capabilities as a two-handed tablet. The question is: How many people need an Android tablet with them at all times? Samsung was right years ago about the trend toward larger-screen phones, which not that long ago we used to jokingly call “phablets.” The Fold combats the distressing trend of people needing handles, like those stick-on circular PopSockets, just to firmly grip their phones. If it catches on, the Fold could be the beginning of an era where big phones really are just tablets.

Perhaps the lesson from the first folding phone will be about the value of making devices smaller. Instead of doing origami on a tablet, imagine folding in half the phone you already own. “I don’t just want bigger screens, and I want being smart with the screens you have,” Milanesi said. Welcome back, flip phones.”

Harry McCracken, Fast Company:

“If folding-screen smartphones do take off, the Galaxy Fold will have its place in history. But will it be remembered as the category’s iPhone—an epoch-defining device that everyone else chases for a decade or more? Or its Palm Treo—a much shorter-term phenomenon? Or could it be IBM and BellSouth’s Simon—the 1994 device that kicked off the smartphone era without succeeding or even influencing anything that followed? We might not know until years have passed and additional iterations of the Fold have come and gone.

For now, even Samsung can’t say where this device will lead it and the smartphone industry. The company seems to be OK with that. CEO Koh told me that he’s optimistic about the prospects for devices like the Galaxy Fold going mainstream. But first, he says, “I want to see the response from the market.” So does everybody else.”

The price point of $1,900 to $2,900 will, at first, be a deterrent to this being a big hit. But even if only a couple of thousand people buy them and give feedback on their likes and dislikes, we could get a read on the category’s short- and long-term potential.

But I can’t help thinking that a smartphone that folds in half from the top down may be the foldable smartphone that ultimately gains the greater public interest. Motorola is rumored to be working on a model that folds in half and easily fits in your pocket, but when unfolded, you get a large screen smartphone.

And recently, Sharp developed an OLED foldable screen optimized for smartphones that can be folded in half. In fact, they even created their own prototype foldable phone to show off their new screen.


I have no doubt that the next phase in smartphone innovation will be to integrate some type of folding screen into handset designs. While we will get better cameras, more memory, sharper screens, etc., the current form factor is due for some major changes in form and design.

Indeed, my colleague at Creative Strategies, Carolina Milanesi (@caro_milanesi), had an important take on the Samsung Galaxy Fold:

“Comparing the #GalaxyFold to a traditional smartphone would miss the point. This is not just a flagship product, and this is the first of a new category which is not for every buyer out there & not just because of price. Status, fashion & tech all come together in the target buyer.”

However, at this stage, I think that the Samsung Fold and other devices that want to be both a smartphone and a tablet may not be the form factor that drives the highest demand in folding smartphones. I do believe this design will attract some serious interest from users and could do well over time, especially if prices can get under $1,000. On the other hand, a smartphone that could deliver a full 7.3-inch screen when unfolded from the top down may be the design that gets broader attention in the foldable smartphone market of the future.

The Business of Gaming Services

on April 15, 2019
Reading Time: 2 minutes

One of the digital markets that continues to surprise me is gaming, both in how large it is and in its growth potential.

I am personally not a gamer for two reasons. In my 20s, I was very competitive in traditional board and card games. When I say I was competitive, that is perhaps an understatement. There was one particular game I played with some new people while I was working in the St. Louis area in which my competitive streak was so off base that it actually scared me, and I backed off from playing competitive games for a long time.

Podcast: Google Cloud Next, Qualcomm AI Day

on April 13, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the Google Cloud Next event, including the company’s announcements around its Anthos multi-cloud offering, its AI-focused additions to G Suite, and the software-based, FIDO-enabled security key coming to Android smartphones, as well as discussing the data center and smartphone-focused AI announcements from Qualcomm.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Competition in PC Processor Market Heats Up

on April 12, 2019
Reading Time: 4 minutes

IDC’s preliminary PC shipment results for the first quarter of the year showed total volumes down about 3% year over year, which was better than we forecast for the quarter. Strong overall commercial shipments driven by the ongoing shift to Windows 10, especially in desktops, helped offset sluggish consumer volumes. One notable drag on shipments has been the ongoing shortage of Intel processors, especially at the lower end of the market. While Intel is closing in on a fix to its supply-side issues, we’ve seen a resurgent AMD grab share from the market leader with highly competitive new products. Moreover, Qualcomm continues to iterate on ARM-based Snapdragon processors targeting the PC market. The result is a PC processor market that’s more competitive than it has been in years.

AMD Grows Its Share
AMD’s market share in PCs has risen and fallen over the years based largely on the strength of its current generation of silicon and its ability to get PC vendors to leverage the company as a secondary source to Intel. Today’s AMD is firing on all cylinders with competitive products up and down its product stack, and Intel’s well-documented issues around its shift to 10nm—and the resulting supply issues—gave the company the opening it needed. And PC vendors have responded by utilizing AMD chips in an ever-widening number of systems, including commercial-focused ones. This helped AMD increase its market share in 2018, and I expect it to grab even more share in 2019.

At CES, AMD leaned into the opportunity, announcing a range of new mobile chips including Ryzen 3000 processors for ultrathin and gaming notebooks, AMD Athlon 300 Series for mainstream notebooks, and AMD A-Series for Chromebooks. The company’s move to get into Chromebooks, which has already yielded new products from HP and Acer, is of notable importance. Years ago, the first Chromebooks shipped with ARM-based processors. Intel saw the opportunity around Chromebooks and wisely moved to capture that market with its low-end processors.

The result: As Chromebook volumes have surged in the United States, primarily in K-12 Education but more recently in consumer as well, Intel has captured those significant volumes. In 2017 Chrome OS represented 32% of U.S. commercial notebook shipments and in 2018 that grew to 36%. Its share of consumer notebook shipments grew from 11.7% to 12.4% during that same time. It’s notable that despite its supply-side challenges—which specifically hit the low end of its line—Intel made sure the supply of processors for Chromebooks never faltered. Now, as Google and its partners look to expand Chromebooks into other regions, finally dedicating the resources and marketing needed to drive the same type of success they’ve had in the U.S., the opportunity will become even larger. And AMD is there to challenge Intel for a piece of the action.

Qualcomm’s Snapdragon Efforts
I’ve written about Qualcomm’s foray into PCs in the past, and more recently I’ve been using Lenovo’s Snapdragon 850-based Yoga C630 laptop and, frankly, it’s amazing. I travel frequently, and for certain users like myself, the ability to be on the move using my PC all day (and frequently for days) between charges is revelatory. I dropped a Verizon SIM into the unit, and I never have to worry about tethering to my smartphone or finding a janky WiFi connection point to get my email. Performance is good, not great, but more than adequate for what I need from a mobile system, and Windows compatibility issues have largely disappeared.

There are plenty of people, including many participating in this market, who are highly skeptical of Qualcomm’s ability to compete against Intel and AMD in the PC space. It will continue to be an uphill battle, for sure. Up until now, there’s been very limited PC vendor support. As a result, there aren’t many options in the market for buyers, and from a market-share perspective, Qualcomm hasn’t really registered yet. In fact, I’d argue that the vast majority of current users are likely tech analysts like myself, tech journalists testing out the systems for review, and die-hard road warriors who are specifically looking for mobile-focused systems.

That may change later this year, however, as Qualcomm launches its next-generation PC-focused chip, the Snapdragon 8cx. Qualcomm says this 7nm chip will drive notably better performance than its current chips, and I’m looking forward to testing systems that utilize it. If I have one concern, it’s that Qualcomm and its partners will over-index on performance at the cost of power efficiency, which is what makes these systems so special.

I do expect to see more PC vendor support for this chip, resulting in more choices in the market. At which point it will be up to Qualcomm, its hardware partners, and Microsoft to tell a more compelling marketing story. Part of that story will depend upon their ability to get carriers to make it easier and potentially less costly to utilize LTE (and eventually 5G) on these systems.

Intel Isn’t Standing Still
I’ve been in the tech industry a very long time, and I’ve seen Intel struggle in the past. It’s usually during these times that the company does its best work. The supply issues should abate soon; the company says its 10nm process is back on track, and it recently named interim CEO Bob Swan to the role permanently. With regard to PCs, Intel seems to be focusing much of its efforts on the high-performance segment, even hiring industry analysts and tech journalists to help tell its story. That said, of late the company also seems to be shifting at least some of its focus toward telling a stronger story about using Intel products in the data center. It’s a logical choice, and Intel would argue it can do both things well. However, it is hard not to see the data center push as a hedge against possible continued share erosion on the PC side of things.

Ultimately, as we often like to say in the tech industry, all this competition is good for the market and PC buyers. Pay close attention to the product mix when we reach the back-to-school season and keep an eye on new products as we head into the holiday season. Things are about to get even more interesting, and by this time next year, we could see a substantially different competitive mix in the market.

How Sony and Disney have influenced Apple

on April 10, 2019
Reading Time: 4 minutes

I have always been fascinated by Steve Jobs’ extreme interest in Sony and Disney. One of the first times I talked to Jobs in the early 1980s, he told me of his interest in Sony’s business as well as how Disney emphasized art and technology to build their company. We know that Jobs was especially interested in how Sony’s co-founder and CEO at the time, Akio Morita, thought about technology and software. With Disney, he admired their integration of art, entertainment and the role technology played in building Disney’s brand and business.

Will My Data Be the Ultimate Ecosystem Lock-in?

on April 10, 2019
Reading Time: 4 minutes

Last week I wrote about how it has never been easier to switch from an iPhone to a Samsung Galaxy S10+ and, as you can imagine, my position generated quite a bit of dialogue from both camps. I had readers pointing out that Samsung has been ahead of Apple in several big tech trends over the years. I also had readers highlight how giving up on the iPhone would mean giving up on the whole Apple ecosystem. Depending on where you sit, there are points to be made on either side of the argument.

As I was reading through those comments, it became apparent to me that we are now at a point in the market where some users still see the smartphone as their only device, while others view their smartphone as playing an essential role in a portfolio of devices on which they rely in different ways throughout their day. The latter kind of user is usually an early adopter who sees the smartphone as the glue that keeps that portfolio of products together without being the most crucial piece of the puzzle at every moment in time. These are users who are happy to go for a run with their wearable or ask their smart speaker about their upcoming appointments without feeling the need to always reach for their smartphone first. Depending on which group you belong to, you might find it easier or harder to switch your phone.

The Shift from Apps to Services

The market has changed a lot from when smartphones first got into our pockets. After bringing us apps, smartphones took time away from the other devices we had been using and became our one-stop shop for all our computing needs: connecting to the internet, taking pictures, listening to music, watching videos, making payments, and so on. Over the past couple of years, although on average we still spend most of our time on a smartphone, we have also come to share more of our computing needs across other devices.

More importantly, many of the apps that made the smartphone the device we came to love more than any other in our lives became full-fledged services across devices and often across ecosystems. This is what makes moving from one ecosystem to another much easier than it used to be, as most services offer parity of features and experiences no matter what they run on. At the end of the day, services are what users want, and users will take them wherever they go. Ecosystem owners are well aware of this, which is why we have seen an increase in the number of “services” they provide directly through hardware they often brand and control. I put services in quotation marks because what they deliver is not always a service in the sense of something you can subscribe to, but rather in the sense of a helping hand in an area where we, as users, need it. Digital assistants are an excellent example: a service we do not subscribe to, but one that is in effect aimed at providing assistance throughout our day. AI-infused Microsoft Office or Google G Suite is another good example; in this case, a service (AI) within a service.

These services have become so important that we carefully consider what we give up when we move from one ecosystem to another, as some of them, due to the provider’s business model, are not available on other platforms or deliver a sub-par experience there.

Addressing Pain Points Creates Stickiness

The more we use these services, the more information we give away. This is valuable information for the service provider because, even when they do not directly monetize it, it helps them improve their services in ways that draw us in even more, creating a self-reinforcing circle.

What has been fascinating for me to watch is ecosystem owners moving beyond services directly related to their core business and starting to offer services that address pain points in areas where technology has yet to play a significant role.

Amazon is doing it with brick-and-mortar retail, Google with Photos, and Apple has been particularly aggressive in health and banking. When you enter areas as personal and sensitive as health and banking, and you can offer a service that puts the user first, you can reach a new level of loyalty that transcends hardware.

It used to be that Apple’s software and services added value to the hardware. You bought the hardware, and the software and services added value to it, which helped justify the financial investment in a product that others often offered for less.

As Apple moves into more life-critical services like banking and health, which it can monetize directly or indirectly, the value must be found in the service itself, which will raise the bar for what Apple delivers. If we add the fact that these services will use some of the most personal data a user can generate, data they trust Apple to keep both private and secure, it is hard to imagine a user lightly moving away from the hardware that hosts such services.

Are we moving to a world where iOS users no longer pick Apple hardware because of the hardware, but instead subscribe to services despite the hardware not being the best option in the market? One might argue that this would not be the first time: iTunes and the iPod are an example of the priority users put on the service back when we did not have smartphones. What is different now is that we are dealing with far more important data than our music collection. While our experience might be best on Apple hardware, the very nature of the data linked to these services will drive our need to have access to it no matter the hardware or ecosystem. After all, who can control what hardware their doctor or bank is using? If you agree with me that this is a possibility, then it is easy to see how, for some services, it is not a case of Apple making them available on other platforms to be successful. Instead, Apple must make it possible for users to access the data behind such services anytime, anywhere, creating users who are loyal to Apple, not to Apple’s hardware.

Google Embraces Multi-Cloud Strategy With Anthos

on April 9, 2019
Reading Time: 4 minutes

Let’s be honest: it’s not the easiest spot to be in. A fairly distant third place in market share, with some serious overhanging concerns regarding trust and privacy. Yet that’s exactly where Google Cloud Platform (GCP) stands in relation to Amazon’s AWS and Microsoft’s Azure as it launches its 2019 Cloud Next event under new leader Thomas Kurian, who was brought in to grow the business in the very competitive, yet extremely important, cloud service provider (CSP) market.

Long seen as a technology leader, Google faces the challenge of proving that they can be a good business partner as well, particularly in the enterprise market. Having had an opportunity to discuss the GCP strategy with Kurian, it’s clear he’s very focused on doing exactly that. Likely due in no small part to his long-time experience at Oracle, the new GCP President is bringing a number of basic, but essential “blocking and tackling” type of enhancements to the business. Included among them are easing contract terms, significantly building out their sales force and go-to-market efforts, simplifying pricing, and taking other steps that are designed to position the company as the kind of potential partner with whom even traditional businesses could be comfortable.

But it takes more than just basic business process improvements to make a splash in the rapidly evolving cloud computing market. And that’s exactly what the company did today with their new product announcements at Cloud Next, particularly in the red-hot area of multi-cloud with their new Anthos managed service offering.

Many businesses have been eager for the ability to migrate their applications, particularly as concerns about lock-in with specific CSPs have served as a deterrent to further cloud adoption. By leveraging the Google-developed but open source Kubernetes container technology, along with a number of other enterprise-ready open source tools, Anthos offers a surprisingly flexible way to shift workloads from either AWS or Azure to GCP, and even lets companies move in the opposite direction if they so choose. Google also purchased a company called Velostrata last year whose technology lets legacy applications wrapped in virtual machines be repackaged into containers that can also work with Anthos.

While Anthos isn’t specifically designed to be a migration tool (it’s a common platform that extends across on-premises and multi-cloud environments so that workloads can be run in a consistent manner across them), the ability to migrate is an important outcome of that consistency. Given the many differences in the PaaS (Platform as a Service) offerings from the various cloud providers, the seemingly simple step of migration actually requires a great deal of technological know-how and software development to make work.

Thankfully, however, that is all hidden from software developers, as Google’s new tools are designed to let enterprise developers make the transition without any changes to their existing code. More importantly, from a psychological perspective, this freedom to move back and forth across platforms can help build trust in Google’s efforts, because it explicitly avoids (and even breaks) the lock-in issues that many companies have faced until now. In addition, it shows Google has a great deal of confidence in their own offerings, as this theoretically could be used to simply move away from GCP to other platforms. Instead, it highlights that Google now believes they can be extremely competitive on many different fronts. Plus, given the reality that GCP is likely a company’s second, or even third, cloud platform choice, the ability to use Anthos to move across platforms is a practical, and yet still strategic, advantage.
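Kubernetes is what makes this kind of portability plausible: a workload description says nothing about the underlying cloud. As a rough illustration (a generic, hypothetical manifest, not Anthos-specific configuration, with an invented image name), a Deployment spec like the following could be applied unchanged to a managed cluster on GCP, AWS, or Azure:

```yaml
# Hypothetical Kubernetes Deployment: note there are no
# provider-specific fields, which is what lets the same spec
# run on clusters hosted by any cloud.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: example.com/web-frontend:1.0  # hypothetical image
          ports:
            - containerPort: 8080
```

Anything cloud-specific (load balancers, storage classes, identity) is layered on separately, which is exactly where a managed service like Anthos earns its keep.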

Another key benefit of the Anthos technology is the ability to see and manage applications across multiple cloud providers, as well as internally, on any private clouds via a single pane of glass. Once again, this capability gives more flexibility to organizations that are still working through their hybrid and multi-cloud strategies. In fact, in the near term, hybrid cloud environments will be the first to benefit because, while Google announced and demonstrated support for multiple cloud platforms, no specific dates were given as to when those capabilities will be generally available.

Even with these technology advancements, as well as the business process improvements, Google is still in a challenging market situation, particularly with regard to potential trust issues. While the company provided several compelling examples at their Cloud Next keynote of customers that are using the technology—and even talked about the trust several of those customers specifically said they had in Google protecting their data—the general perceptions are still a concern. Kurian acknowledged that as well but pointed out that the company is taking extraordinary steps to ensure the privacy and security of customers’ data, even to the point of logging any and all interactions that Google’s employees have with customer data (and requiring written permission to do so in the first place). Those kinds of policies are clearly important, but building trust often takes time, so Google will have to be patient there.

Thankfully, however, they are active participants in a market that demands fast technological advancements, and there is little doubt the company has the capabilities to deliver on that front. The cloud computing market continues to evolve at an extremely rapid pace, and Google’s large cloud competitors will likely react to the news with additional announcements of their own. Still, it’s clear that GCP is moving away from a purely technology-driven offering to one that’s increasingly cognizant of real-world customer needs, and that’s definitely a step in the right direction.

How Services Could Sour Apple

on April 9, 2019
Reading Time: 4 minutes

I have had a range of conversations with colleagues in the tech industry, and it has been interesting to hear the same observation brought up. There seems to be a broad sense the narrative around Apple is particularly negative at the moment.

Now, longtime Apple watchers will know this is nothing new. The past decade, in particular, has seen a flurry of narrative swings around Apple, from overly bullish to dramatically negative. We are indeed in a negative cycle right now, and understanding why is helpful. Beyond the why, I do think there are some questions around services we don’t have enough information on that, until answered, are likely to keep driving the negative cycle.

Intel Helps Drive Data Center Advancements

on April 8, 2019
Reading Time: 3 minutes

At last week’s Intel Data-Centric launch event, the company made a host of announcements focused on new products and technologies designed for the data center and the edge. Given that it’s Intel, no surprise that a large percentage of those product launches focused on CPUs designed for servers—specifically, the second generation of the company’s Xeon Scalable CPUs, formerly codenamed “Cascade Lake.” However, as I’ll get to in a bit, the largest long-term impact is likely to come from something else entirely.

Similar to the first-generation launch of Xeon Scalable back in July of 2017, Intel focused on a very wide range of specific applications, workloads, and industries with these second-generation parts, highlighting the very specialized demands now facing both cloud service providers (CSPs) and enterprise data centers. In fact, they have over 50 different SKUs of Xeon Scalable CPUs for those different markets. They even added a dedicated new line of CPUs specifically focused on telecom networks and the needs they have for NFV (network function virtualization) and other compute-intensive tasks that are expected to be a critical part of 5G networks.

A key new feature of these second-generation Xeon Scalable CPUs is the addition of a capability called DL Boost, which is specifically designed to speed up Deep Learning and other AI-focused workloads. As the company pointed out, most AI inferencing is still done on CPUs. Intel is hoping to maintain that lead through the addition of new vector neural network instructions (VNNI) to the chip, as well as additional software optimizations it’s doing in conjunction with popular AI frameworks such as TensorFlow, Caffe, PyTorch, etc.
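The speedup DL Boost targets comes from doing inference math on quantized 8-bit integers with a wide accumulator rather than on 32-bit floats. Here is a minimal sketch of that quantize, multiply-accumulate, rescale pattern (plain Python purely for illustration; the scale and all values are invented, and real VNNI fuses these steps into single hardware instructions):

```python
# Illustration of the int8 multiply-accumulate pattern that VNNI-style
# instructions accelerate: quantize floats to 8-bit integers, take an
# integer dot product with a wide accumulator, then rescale the result.
# All values and the scale below are invented for the example.

def quantize(xs, scale):
    """Map floats into the int8 range [-128, 127] with uniform step `scale`."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

weights = [0.5, -1.25, 2.0]
activations = [1.0, 0.75, -0.5]
scale = 0.25  # chosen so these values quantize with no rounding error

wq = quantize(weights, scale)      # [2, -5, 8]
aq = quantize(activations, scale)  # [4, 3, -2]

# Integer multiply-accumulate: this is the inner loop that the hardware
# collapses into wide-accumulator instructions.
acc = sum(w * a for w, a in zip(wq, aq))  # -23
approx = acc * scale * scale              # rescale back to float: -1.4375
exact = sum(w * a for w, a in zip(weights, activations))  # -1.4375
```

The integer dot product is where hardware wins: int8 multiplies and adds replace float ones, with the accumulator kept wide enough (32-bit in VNNI) to avoid overflow. In real workloads the scale is chosen per tensor, so quantization introduces a small, usually acceptable, error.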

Despite all the CPU focus, however, the sleeper hit of the entire event, in my mind, was the release of Optane DC Persistent Memory, which works in conjunction with (and only with) the new Xeon Scalable CPUs. Based on a technology that Intel has been working on for 10 years and talking publicly about for about a year, Optane DC (short for Data Center) Persistent Memory is essentially a low-cost complement to traditional DRAM that allows companies to build servers with significantly more memory (and at a much lower cost) than would otherwise be possible. Available in 128, 256, and 512 GB modules (which fit into standard DDR4 DIMM slots), this new memory type adds an entirely new layer to the storage and access hierarchy of existing server architectures by offering near DRAM-like speeds but with the larger capacities, lower costs, and persistence more typical of SSDs and other traditional storage.

In real-world terms, this means that memory-dependent large-scale datacenter applications, like AI, in-memory databases, content delivery networks (CDNs), large SAP Hana installations, and more, can see significant performance gains. In fact, at several different sessions with Intel customers who were early users of the technology, there was a tangible sense of excitement surrounding this new memory type and the benefits it provides. Quite a few discussed using Optane Persistent Memory with some of their toughest workloads and being pleasantly surprised with the outcome. As they pointed out, many of the most challenging AI workloads are more memory-starved than compute-starved, so opening up 6 TB of active memory in a two-socket server can make a very noticeable (and otherwise unattainable) impact on performance.

Optane Persistent Memory is also the first hardware-encrypted memory on the market, thanks to onboard intelligence Intel designed for the device. Intel provides two modes for the Persistent Memory to operate: the first, called Memory Mode, is a compatibility mode that lets all existing software run without any modification, and the second, called App Direct Mode, provides greater performance to applications that are adjusted to specifically work with the new memory type.
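App Direct mode matters because software can treat persistent memory as directly addressable storage: data written through a memory mapping survives a restart with no serialization step. The real interface is Intel’s PMDK C libraries; purely as a loose conceptual analogy, here is the same idea sketched with an ordinary memory-mapped file in Python (the file name is invented):

```python
import mmap
import os
import tempfile

# Stand-in for a persistent-memory region: an ordinary file we map
# into the address space and write through with plain stores.
path = os.path.join(tempfile.mkdtemp(), "pmem_region")

# Size the backing "region" (4 KiB) before mapping it.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Write through the mapping; flush() plays the role of a
# persistence barrier that guarantees the stores are durable.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"
        m.flush()

# After a "restart", the data is simply there at the same offset:
# no file parsing or deserialization needed.
with open(path, "rb") as f:
    recovered = f.read(5)
```

Memory Mode, by contrast, requires no such code changes: the persistent modules silently back a larger memory pool, with DRAM acting as a cache in front of them.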

In addition to the Xeon Scalable and Optane announcements, Intel also discussed new intelligent Ethernet controllers designed for data center applications, and some of their first 10nm chips: the new Agilex line of FPGAs (Field Programmable Gate Arrays—essentially reprogrammable chips). Though they are typically only used for a limited set of applications, FPGAs actually have a great deal of potential as accelerators for AI and network-focused applications, among others, and it will be interesting to see how Intel continues to flesh out their wider array of non-CPU accelerators.

All told, it’s clear that Intel is now thinking about more comprehensive sets of solutions for data centers, CSPs, and other institutions with high-performance computing demands. It is a bit surprising that it took the company as long as it did to start telling these more all-encompassing stories, but there’s little doubt that it will be a key focus for them over the next several years. Yes, CPUs will continue to be important, but the reinvention of computing, memory, and storage architectures will undoubtedly yield some of the most interesting developments to come.

Podcast: Intel Data-Centric Event, Cloud-Based Gaming

on April 6, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing this week’s data center-focused Intel launch event, discussing both the new products and what they say about the current state of the data center, and chatting about new gaming research and the strong opportunity for multi-platform gaming services.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Software’s Evolution to Services

on April 4, 2019
Reading Time: 3 minutes

The services narrative is a hot one right now for a variety of reasons. But one thing worth pointing out, piggybacking on my analysis from yesterday, is how innovations in the data center and the underlying technologies powering it are making a cloud-first world much more of a reality than ever before. These innovations are what make software’s evolution to services possible.

Scooters, Bikes, and ‘The Third Lane’

on April 4, 2019
Reading Time: 4 minutes

As a long-time telecom analyst, I’ve done numerous projects and written countless reports about the ‘last mile’ problem. In fact, one of the most promising use cases for 5G is using wireless to get from a fiber drop (or small cell) to the home, since FTTH has proven so cost-prohibitive. But Tim Bajarin’s Techpinions column on Monday, “How Scooters Are Rewriting our Views Of Personal Transportation”, got me to thinking about an equivalent problem that exists in how we get from A to B – and how electric scooters, bike/e-bike sharing, and the like can help with that ‘last mile’ challenge. Tim wrote that his Element folding scooter has helped with that ‘last mile’ in certain instances. I’d like to expand on that concept in this column. And introduce another metaphor: The Third Lane.

Two weeks ago, in a column on “Tech’s Unintended Consequences”, I wrote, as a prominent example, about how Uber, Lyft, and other TNCs (transportation network companies) are creating enormous congestion problems in major cities. At the same time, investment in public transportation has waned, leading to a vicious cycle of higher fares and declining service. But these ‘personal transportation solutions’ (PTSs) could be part of a broader answer to the last mile, in a way that enhances, rather than eviscerates, our current transportation infrastructure. Consider public transport, particularly in close-in suburbs (rather than cities, where more is walkable). Often, a bus or a commuter train drops you a couple of miles from your final destination. This is where TNCs have proven valuable, but they are also clogging roads that were never built to handle that sort of volume. Or think about how, in Silicon Valley, a train connects the major towns (Palo Alto, Mountain View, etc.), but from there people have to fan out to offices from A to Z. Or in my home town of Boston, where an entire new neighborhood and series of office buildings (the Seaport) was built, employing tens of thousands of people, but requiring a 1.5-2 mile walk, often in crappy weather, from the closest subway station. The TNCs (and private shuttle buses subsidized by companies) have come in to fill that gap. But they are not a viable long-term solution for so many one-off, one-person, short-haul trips.

This is where ‘third option’ solutions such as bike sharing, e-bikes, and electric scooters can fill an important gap. While they might not be optimal for a commute of more than five miles for most people (and the bike lane infrastructure might not exist for that entire route), they’re perfect for a couple of miles. The issue is, what part of the road can they use? They’re not permitted on sidewalks, and in many cities there aren’t adequate (and protected) bike lanes. As a result, the percentage of people who use PTSs is limited to about 5%, mainly zealots and the fearless and intrepid.

So I’m going to borrow a concept coined by Starbucks founder Howard Schultz, who described his cafes as ‘The Third Place’. If PTSs are going to be a viable component of a multi-modal transportation solution, they need a safe and enjoyable passage from parking junctions and transport stops to their final destination. I call it ‘The Third Lane’. That means reconfiguring or building a protected road lane or part of a sidewalk that radiates out to places people live, work, and play. We might not be able to build safe lanes everywhere for each person’s individual commute, but we can think about corridors that serve large clusters of people. In addition to being protected, these lanes need to be properly surfaced, since scooters and bikes can’t handle potholes, sewer ruts, and the like, in a way that cars can.

A critical piece of this is that municipalities have to be part of the overall planning. We can’t have Lime, Bird, etc. barging in and then getting regulated after the fact, having befriended no one. Think about downtown Atlanta as an example. There’s a MARTA stop in Buckhead (with no parking), and there are probably 100,000 people employed within 2 miles of that station, at numerous office building clusters. What if the city added a lane/path/portion of sidewalk along key corridors where people could use PTSs, reserved in advance, to get within a couple hundred meters of their final destination? And, as an alternative or supplement, a system of buses along dedicated lanes that run in a loop, sort of like an airport rental car shuttle?

This mentality exists in some of the denser, more forward-thinking cities. In places like Amsterdam and Copenhagen, all four modes of transportation are on relatively equal footing, from a planning perspective: car, public transport, bike/PTS, and pedestrian.

So, my message to the Limes and Birds of the world, with your dockless PTSs: use your goodwill, your data, your AI, and your tens of billions in valuation to work with local governments to create viable ‘third lanes’ along key corridors. Pick a couple of signature projects, where there are large numbers of workers who need to get a couple of miles from a transport or parking hub: South Station to the Seaport in Boston; Buckhead in Atlanta; Georgetown in Washington; from downtown Miami’s emerging multi-modal hub to the Brickell area, etc.

I realize it’s not easy, and that in many cities, finding the ‘real estate’ for that third lane is a big challenge. But it would be great to try this, in a greenfield sort of manner, in a few spots where it’s both viable and serves enough people that we could gather some good data. Sort of like some of the larger-scale ‘smart city’ demo projects that have kicked off in places like Amsterdam and Toronto.

The Grass is the Greenest it Has Ever Been

on April 3, 2019
Reading Time: 5 minutes

I make no secret that I have been using an iPhone as my primary phone since 2007. As an analyst, I have to try different products in all kinds of categories, including phones, which means I have used Android phones as well as everything that came before, from Windows Phone to PalmOS and every flavor of feature phone. No matter what phone I tried, however, my trials always ended with me going back to the iPhone. There are two main reasons for that: one is that I prefer the UX, and the second is that the value of using devices across the Apple ecosystem is much more evident to me.

In February, I got the Samsung Galaxy S10+ to test, and I expected to follow the pattern of my previous Samsung phone trials: I love the design and the way the phones fit in my small hands, I like the camera, but ultimately I am overwhelmed by the UX. What ended up happening instead is that I am still using the phone, and I have seriously thought about making the switch. So here is what has changed, and what is holding me back.

A Cloud World

Maybe it is the maturity of the smartphone market, which has led to app parity for the most part. Or perhaps it is the fact that even in a home like ours, which has more Apple devices than any other brand, we have happily let other ecosystems come in and take a slice of our time and money pie. Or maybe it is the combination of the two that helps consumers move from device to device more easily. Of course, there are hardware differences, some proprietary apps or features across the various ecosystems, and differences in how brands approach privacy. But the point, I think, is that with services and apps that travel across devices thanks to the cloud, moving across ecosystems is easier than it has ever been.

Ultimately, this is why I think Apple is doubling down on services. Yes, they will get an extra revenue source, but more importantly, they will create more stickiness in their ecosystem, which will lead consumers to think twice before moving on. While using the Galaxy S10+, for instance, I was quite happy to move from CarPlay to Android Auto and have Google Assistant promptly bring up whatever song I wanted from Apple Music while I was driving, though I was unable to play my playlists even when they are available in iCloud.

Two Things Are Holding Me Back

There were two things I particularly missed when using the Galaxy S10+ and the Galaxy Watch Active, and neither is out of reach for Samsung.

The first thing I missed while using the Samsung Galaxy S10+ was iMessage. It is not about the green and blue bubbles, nor is it about saving on text messages. What I missed was the ability to send a message from any device I was on, as I usually do, which makes iMessage a core part of how I communicate at work. Unfortunately, while Windows 10 has made some progress in supporting text messaging across Android, the experience is just not as fluid. I hope that Samsung will spend some time creating a better-optimized app that goes across its phones, tablets, and Windows PCs. I do wonder how many iPhone users would consider a move to Android if iMessage were available as a cross-platform app. While there are other apps I use to talk to people, iMessage is by far what I rely on every day.

The second thing I missed was the deep integration that comes from controlling all the pieces of the experience. The best example is possibly the vibration the Apple Watch gives when you are using Apple Maps directions and should be taking a turn. I prefer Google Maps to Apple Maps, but as Google has given up on designing an Apple Watch app, I end up using Apple Maps when I drive. With the Galaxy Watch Active, which I see as the best alternative to the Apple Watch in the market today, I missed that gentle tapping, a feature that might be hard to implement given the combination of the watch running on Tizen and Samsung not controlling the experience on the Google Maps side.

As you can see, both my examples have little to do with Samsung’s hardware and a lot to do with the limitations Samsung is facing because they are not controlling all aspects of my experience.

More Confidence, Not Technology, Would Make Samsung’s Devices More Desirable

Aside from not being able to control the full experience, I also noticed that the options available on Samsung’s devices are just too many. Yes, there is such a thing as too much choice. Especially for consumers coming to Samsung from iOS, I think the available range of options can be overwhelming. In a way, consumers on iOS are used to Apple making decisions for them. Users can, of course, change settings, but by and large Apple picks defaults that many users will never change, either out of convenience or because they are not savvy enough to go and change them. In most cases, Apple’s choices do not hinder the user experience and simplify things for the user.

It seems to me that Samsung has opted for the opposite: it believes there is value in giving users all the options there could be and letting them figure it out. The new One UI helps by surfacing the most common use cases and settings, but you can easily find yourself three layers down in the options menu at any given turn. I am not sure if this broad set of options is the manifestation of a lack of confidence on Samsung’s part, but I think that over the years its software implementation has improved, and so has its understanding of what consumers want rather than what is technically possible, so it could make those choices for its users. The camera UI is an excellent example of where Samsung has spent some time making decisions on default settings and leaving options to more advanced users, but there is room for further simplification in my view.

Making those decisions for consumers would also improve the cross-device experience you get from owning multiple Samsung devices. Samsung might be at a disadvantage because it does not control the underlying OS, but this disadvantage can turn into an advantage as consumers come to care more and more about the in-app experience and seek out best-of-breed products. In other words, if productivity is what I care about most, I am likely to find a PC and phone combination that empowers me to be efficient, and this might mean that my two devices do not run the same OS and are not part of the same ecosystem. The same can be said about gaming or media consumption. We have seen Samsung work with many partners to bring unique experiences to its products. I hope that such partnerships will extend to developers as well at the next SDC in the fall. Pointing at the large installed base of devices developers have access to is useful, but working with them to create better experiences is critical for developers and creates more stickiness for Samsung.

Gaming Content Ecosystem Drives More Usage

on April 2, 2019
Reading Time: 3 minutes

As I wrote a few weeks back, the gaming market is very large and extremely diverse. People spend extraordinary amounts of time and money playing games across an increasingly broad range of devices and platforms. From marathon gaming sessions on tricked out desktop PC gaming rigs, to snippets of game “snacking” on smartphones while standing in line or killing time in other situations, digital gaming has become a mainstream part of our culture.

In fact, gaming has become so popular that it's created an entire ecosystem of gaming-related content and other activities, such as professional eSports, that have proven to be enormously (though sometimes a bit surprisingly) popular as well. Game streaming video networks like Twitch here in the US, or Douyu over in China, draw enormous daily audiences measured in the millions for the live game streams they typically show. Similarly, Google's YouTube and the China-based Youku Tudou now host an enormous amount of recorded video content created by both professional and amateur gamers for others to watch.

In a recent survey by TECHnalysis Research on gaming trends among US and Chinese consumers, we discovered that US gamers who participated in the survey said they watched about 12 hours of gaming-related content a week between Twitch and YouTube, while Chinese gamers averaged 11 hours between Douyu and Youku Tudou. Those are impressive numbers to be sure, but they're downright staggering when you add them to the average 65 hours (in the US) or 47 hours (in China) of weekly gaming time respondents also reported. To be clear, these numbers were self-reported (and a series of checks were put into place to try to make them realistic), but they're likely too high. Still, regardless of the exact numbers, it's clear that gaming and related activities take up an enormous amount of many people's non-sleeping hours.
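A quick back-of-the-envelope sum of the self-reported figures above helps illustrate why they are "likely too high" (the hour figures are taken directly from the survey results as described; the daily conversion is simply the weekly total divided by seven):

```python
# Sum of self-reported weekly gaming and gaming-content hours
# from the TECHnalysis Research survey figures quoted above.
us_gaming, us_watching = 65, 12     # hours per week, US respondents
cn_gaming, cn_watching = 47, 11     # hours per week, Chinese respondents

us_total = us_gaming + us_watching  # 77 hours/week
cn_total = cn_gaming + cn_watching  # 58 hours/week

print(f"US: {us_total} hrs/week, about {us_total / 7:.1f} hrs/day")
print(f"China: {cn_total} hrs/week, about {cn_total / 7:.1f} hrs/day")
```

Eleven hours a day for US respondents and over eight for Chinese respondents is hard to square with work, school, and sleep, which is why the self-reported totals are best read as directional rather than literal.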

On top of that, a surprisingly large group of gamers said they created gaming content through their own live-streaming efforts and/or uploading of their own recorded games. As Figures 1 and 2 show below, this is particularly true in China, where PC-based gaming is even more popular than it is here in the US.

Fig. 1

Fig. 2
As you can see, creating original content is still done by a bit less than half of the US respondents (the numbers add up to more than 100% because some people both live-streamed and uploaded content), whereas in China only 40% haven't tried it yet. What's particularly interesting, though, is that the practice is remarkably strong in both countries up through the 35-44 age group—it's not just millennials who are doing it.

Similarly, it’s not just millennials who are participating in gaming competitions and watching professional gaming via eSports TV shows and events. Driven in part by the popularity both of gaming and gaming content, the eSports phenomenon has taken many by surprise. As the survey results indicate, however, it’s also very popular with many gamers, with around 65% of US gamers and about 82% of Chinese gamers saying they had watched or even attended a professional gaming event. Watching those tournaments also clearly inspired many gamers to participate in their own gaming competitions, just as watching other professional sports often encourages participation in them. An impressive 66% and 65% of US and Chinese participants, respectively, said they had participated in either a PC, smartphone, or game console-based competition. Again, the participation rates stay fairly consistent through the 45-54 age group in the US and the 35-44 demographic in China.

Even more importantly, all this consumption and creation of gaming-related content is inspiring gamers to spend even more time and money on their gaming habits. In a classic virtuous circle type of model, the interest in gaming drives interest in gaming-related content, which in turn drives yet more interest in gaming, and on and on.

Obviously, there are limits to how far the gaming phenomena can extend and there are some people who are already facing challenges with balancing their game time with the rest of their lives. Still, it’s apparent that gamers feel very passionately about their hobby and it’s equally apparent that it represents a lucrative opportunity for companies who can tap into that passion. Given the level of engagement that many people have with gaming, and the growing ecosystem that now surrounds it, that opportunity is sure to last for many years to come.

(You can download highlights of the TECHnalysis Research Multi-Device Gaming Report here.)

How Scooters are Rewriting our Views of Personal Transportation

on April 1, 2019
Reading Time: 5 minutes

Not long after the original Segway was launched, I had the privilege of being able to test one. I had met its creator, Dean Kamen, at a dinner in San Jose a year before the Segway launch. Others who had actually been told about it beforehand, like Steve Jobs and noted venture capitalist John Doerr, went on record saying they felt Mr. Kamen's product would be a game changer.

While the Segway did make a splash and did generate interest as a short-range mode of transportation, it never took off. In fact, it was banned from sidewalks in some cities for being a nuisance to pedestrians, and deemed unwelcome in many other cities that refused to let people use them on city streets.

But the one thing the Segway did do is introduce what is now called the last-mile transportation link. And it has birthed the current "last mile" electric vehicle of the moment: the scooters that populate the streets and roads of many cities today.

The chart below shows the areas of the world where scooters are taking off.

Here in the US, they are also populating large cities, but in many, they have become controversial due to three key factors.

First, without any regulation, scooters like the ones from Bird and Lime started showing up in huge numbers in large cities and were more a nuisance than a welcome vehicle for last-mile journeys. Many cities banned them outright while they developed rules and regulations governing their use, and made companies bid for the chance to place their scooters in these towns. Most cities now have solid regulations in place that control how many scooters can be deployed, require proper insurance, and guarantee that scooters are picked up at night to keep the sidewalks from being clogged and the fleets under a semblance of control.

Second is the fact that they can be dangerous. These scooters, while not speed demons, do travel at around 15 miles an hour, and if you fall off at that speed, you could be injured. The chart below lays out the most common injuries.

CNET spoke to trauma centers in multiple cities to get feedback on the kinds of scooter-related injuries they were seeing:

“CNET spoke to trauma centers in Denver, San Diego, San Francisco, and Austin. All reported an uptick in injuries from scooter accidents. It’s been just a few months since the vehicles were unleashed onto city streets, so emergency room doctors say they’re only beginning to collect data.

“We see some scary injuries,” said Dr. Chris Colwell, chief of emergency medicine for Zuckerberg San Francisco General Hospital and Trauma Center. “There’s still a lack of recognition of how serious this can be.”

Colwell said his emergency room is logging about 10 injuries a week. They range from extensive bruising to severe head trauma. Given the hills in San Francisco, he also sees a lot of road rash. “We saw a guy who fell over on his back this week,” Colwell said. “He ended up going through so many layers of skin, and we had to essentially put him to sleep to clean out the gravel embedded in his back.”

Bloomberg recently reviewed a study published in JAMA Network Open and found the following:

“The vast majority of the injured were riders as opposed to pedestrians. They averaged around 34 years of age and were 58 percent male. The study revealed a general lack of operator adherence to traffic laws or warnings by the scooter companies themselves, according to an article published Friday in JAMA Network Open. Though scooters can reach 15 mph, less than 5 percent of riders were reported to have been wearing helmets.
About 40 percent of patients had head injuries, and almost 32 percent suffered broken bones. The study said a significant subset of the injuries occurred in patients younger than 18.
The researchers don’t try to compare your chances of getting killed on (or by) a scooter versus a car, but rather the physical damage being wrought. And how did these 249 California riders get hurt, exactly? More than 80 percent just fell off, according to the study. Eleven percent hit something.”

This is a significant issue and one that will plague scooters for years unless riders do more to protect themselves, such as wear helmets and even knee and elbow pads when riding.

The third area is the business model. The Bird and Lime scooters cost around $550 each and have a life span of only around 1-2 months at best.

ExtremeTech talked with Quartz reporter Ali Griswold, who did a study on one of the scooter programs in Louisville, KY:

“Quartz reporter Ali Griswold performed an analysis on revenue-per-scooter using open data sets provided by Louisville, KY. The question of how much revenue companies earn per scooter is an interesting one.

“Griswold’s analysis was made possible by the fact that the initial Louisville KY data sets included a unique identifier for each scooter, allowing her to track how long the vehicles persisted in the city. Later data dumps have removed this modifier, likely to prevent the kind of analysis she performed.

What she found is that the average scooter lived 28 days, with a median lifespan of 23 days. Focusing only on the oldest vehicles in the data set improved this slightly, to 32 and 28 days, respectively. Using the oldest vehicles for a baseline and excluding December data (her data set ran from August – December), the median vehicle took 70 trips over 85 miles.

When you run through the various costs and revenue, there’s just no way these companies are doing anything but losing huge amounts. The average revenue generated per scooter in Louisville, at least, comes out to between $65 – $75. Data available elsewhere online suggests each scooter costs Bird $551. The company wants to get that down to $360, but even so, it’s losing $285 – $295 per scooter deployed in the Louisville area.”

I have read other stories on the economics of the on-demand scooter business, and they all question the business models and if they can ever be profitable and sustainable.
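The quoted analysis boils down to simple unit economics. A minimal sketch of that math, using the Louisville figures quoted above (the $551 cost, $360 target, and $65-$75 lifetime revenue are taken from the quote; the midpoint is my simplification):

```python
# Back-of-the-envelope unit economics for a shared scooter,
# based on the Louisville figures quoted above.
scooter_cost = 551       # reported cost per Bird scooter, USD
target_cost = 360        # Bird's stated cost-reduction target, USD
lifetime_revenue = 70    # midpoint of the $65-$75 per-scooter estimate, USD

loss_today = scooter_cost - lifetime_revenue       # loss at current cost
loss_at_target = target_cost - lifetime_revenue    # loss even at target cost

print(f"Loss per scooter at current cost: ${loss_today}")
print(f"Loss per scooter at target cost:  ${loss_at_target}")
```

At the $65-$75 revenue range, the loss at the $360 target cost works out to $285-$295 per scooter, matching the quote's figures; at today's $551 cost, the per-scooter loss is considerably worse.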

There is another interesting economic model around scooters developing, and that is one in which a person buys a scooter and uses it as needed for last mile transportation.

Although I have used a Lime Scooter once or twice for short distance travel, last fall I tested the Element Folding Electric Scooter from Jetson.

This particular model costs $299, weighs 18.74 lbs, and has a range of up to 10 miles. Jetson also has a higher-end, longer-range, and more durable model called the Quest, which sells for $539, weighs 28.4 lbs, and has a max range of 18 miles.

If you were going to use this a lot, then the Quest would be the better purchase. But in my case, the Element meets my needs, as I mostly use it to go to the local grocery store and around the neighborhood, although I have also packed it in my trunk, taken it with me to downtown San Jose, and used it there to travel short distances to meetings.

The Element's light weight makes it easy to put in my car's trunk, or in the bottom of the grocery cart at the store. Portability is very important to me, and having something like this in my car for last-mile journeys has been quite useful.

Both Jetson scooters get high ratings, and so far the Element has held up well over the six months I have been using it. Of course, many other companies, such as Segway, Xiaomi, Razor, Gotrax, Glion, and others see the market for personal scooter usage and are ramping up new models for our market.

I am pretty health conscious as well as balance challenged, so I won't use the scooter without a helmet and at least elbow pads. But so far I have had no major issues with the Jetson Element and continue to use it as needed.

While owning one's own scooter is not for everyone, and the on-demand model that Bird, Lime, and others are using has merit, especially in big cities where getting to a location quickly and easily is called for, I have a sense that the ownership model has some serious legs and could become one of the more interesting ways scooters are used in the future.

Podcast: Apple Services Event

on March 30, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Tim Bajarin, Carolina Milanesi and Bob O’Donnell analyzing this week’s services-focused event held by Apple, including discussions around the new Apple Card credit card service, the TV+ streaming TV and content aggregation application, the News+ magazine service, and the Apple Arcade gaming service.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Cloud-Streamed Gaming: More Questions Than Answers

on March 29, 2019
Reading Time: 4 minutes

Gaming is an ultra-hot topic in the world of tech right now, made clear by recent big announcements from both Google (Stadia) and Apple (Apple Arcade). Gaming has been one of the bright spots for a challenged PC market, with PC gamers willing to refresh hardware more often than other consumers and to spend more money every time they do it. Gamers spend money in online gaming marketplaces, in a still-thriving console gaming market, and—of course—increasingly on mobile platforms such as Android and iOS. All told, gaming drives huge profits across a wide range of companies, and that’s before we start adding up the dollars associated with eSports. One issue that’s becoming increasingly clear, however, is that in the near term the gaming market is likely to experience some growing pains as new technologies become available and long-standing business models face disruption.

Cloud Gaming, or Cloud-Streamed Gaming?
At IDC we're about to embark on a very ambitious gaming survey and forecast project. We plan to run surveys in five countries (U.S., China, Brazil, Germany, and Russia), capturing responses from more than 12,000 respondents, including hardcore and casual gamers as well as dabblers and nongamers. Our goal: to better understand consumer sentiment around everything from hardware and brand loyalty to device refresh rates to spending on software, services, and accessories. We're also devoting an entire section to cloud-streamed gaming. One of the most challenging things to do in any consumer survey is to ask respondents about technologies or services that are not yet widely shipping (or understood). As we've been building our survey, we've had some spirited internal debates about the nature of cloud gaming versus cloud-streamed gaming and more.

What’s the distinction? As my colleague Lewis Ward notes, most games today have some cloud element to them. Certainly, every multiplayer console, mobile, or PC game that lets us play against friends, family, and strangers all over the world falls into this camp. Apple made a point of saying that Apple Arcade will let you play offline, a dig at Google’s online-only Stadia. However, in the next breath, Apple points out that you can jump from iPhone to iPad, Mac, and Apple TV, clearly utilizing a cloud component. And I’m guessing some of those new iOS-only games will have multiplayer modes.

So, essentially, we already live in a cloud gaming world. Stadia, however, is clearly cloud-streamed gaming, as subscribers will be able to play games across a wide range of devices (as long as they're online and support a Chrome browser). As Ben Bajarin notes, other cloud-streamed services from companies such as Sony and nVidia are even more cross-platform friendly. We're still waiting to see what other big players, such as Microsoft, will offer. The bottom line, however, is that the experience of any cloud-streamed gaming service will depend heavily upon that network connection.

And it’s here where Google left us with more questions than answers, at least for now. The company didn’t talk in its announcement about specific home broadband requirements or address network latency, which is what will dictate the quality of the experience on Stadia. It also didn’t talk about pricing tiers (or pricing at all). Will games streamed at 4K cost more than 1080P games? Will everyone get 60 frames per second?

At least one promise of cloud-streamed gaming is that with the CPU and GPU in the cloud, gamers can meet on an even playing field, regardless of the device upon which they are playing. Moreover, it means players can play games across their devices, instead of having games they can only play on PC, on console, or on their mobile device. In theory, it's a very compelling offering, but as with most things in technology, the real value won't be apparent until we see the execution.

Impact on PC and Console Gaming Markets
If cloud-streamed gaming delivers on its promise (or perhaps the right question is not if, but when) and it removes the local computing power from the gaming equation, what will the impact be on the hardware vendors that sell high-end gaming PCs, CPUs, GPUs, and more? I know at least one prominent PC gaming executive who thinks cloud-streamed gaming will never impact his business; he argues that some people will always want a high-powered gaming rig to play on. To date, this has certainly been true. Hardcore gamers will spend big bucks to gain frame rate advantages that can mean the difference between life and death in a game. Frankly, that element of PC gaming has always bothered me a bit, as it effectively means that those with deeper pockets enjoy often significant in-game advantages.

However, it’s this desire to have the best that makes the PC gaming hardware market so appealing to vendors. Moreover, it’s this faster refresh cadence that also enables game developers to embrace next-generation technologies before any other consumer categories (ray tracing-capable graphics cards being a good current example). The PC gaming hardware market thrives in this cycle, so what happens if cloud-streamed gaming disrupts this? Where does that booming market go if in the future a person using a three-year-old $300 Chromebook has the same experience and gaming capabilities as somebody on a brand new $4,000 gaming rig?

In the end, that is the biggest question nobody can answer right now. Is cloud-streamed gaming disruptive, additive, or something else? The cost of access, the available games, the buy-in from eSports athletes, and the quality of experience will all play a role in the final answer. Regardless, it’s going to be very interesting watching it all unfold over the next few years.