How Services Could Sour Apple

I have had a range of conversations with colleagues in the tech industry, and it has been interesting to hear the same observation come up again and again: there seems to be a broad sense that the narrative around Apple is particularly negative at the moment.

Now, longtime Apple watchers will know this is nothing new. The past decade, in particular, has seen a flurry of narrative swings around Apple, from overly bullish to dramatically negative. We are indeed in a negative cycle right now, and it is helpful to understand why. Beyond the why, I do think there are some questions around services we don’t have enough information on that, until answered, are likely to continue to drive a negative cycle.

What’s Driving Negative Sentiment
The easiest part to understand is why sentiment is negative at the moment. It rests squarely on the bias that Apple is a hardware company and not much else. The slowing of iPhone sales, and the clear impact that is having on Apple’s bottom line, only fuels this view, and we are already seeing the doomsayers emerge from the shadows to point out how they have been right all along.

It’s been no secret that the vast majority of Apple’s revenues come from hardware. However, I’ve long argued that viewing Apple through a hardware-only lens is the incorrect way to understand the company. Apple makes great hardware, yes, but if Apple ran the same operating system as the competition (Android), the iPhone business would not be the size it is today. Meaning, iOS is a lot more valuable to the whole Apple picture than most people realize.

When people ask me what kind of a company Apple is, I explain they aren’t really a tech company, nor are they a hardware or a software company. At its core, Apple is a customer experience company. Apple’s focus is the customer and providing the best experience possible, no matter the category of product. This point cannot be missed, because it has held true in how they approach hardware and software. But, for me, there are still questions about Apple’s execution as a customer experience-focused company when it comes to services.

On stage at Apple’s March 25th media event, CEO Tim Cook made an effort to point out that Apple makes world-class hardware, world-class software (both true) and a growing collection of world-class services. It is the point about world-class services that I think is the big question mark.

The Services Risk
While it is true that many of Apple’s exclusive apps, like iMessage, have a service as their foundation, most people don’t recognize or perceive them as services. Apple’s App Store, iTunes Store, etc., are services, but whether anyone attaches their broader understanding of a service to them is a question.

The risk, in my opinion, of Apple’s evolving emphasis on services is the potential to not live up to the bar the company has set with hardware and software. From the many research studies we have done around services, it is clear to me there are different expectations in the mind of a consumer when they pay a monthly or annual fee for a service than when they buy hardware or software.

In Apple’s case, and around their hardware in particular, customers can easily justify the cost and feel it is ‘worth it.’ The big question I have is whether Apple can convince customers their services are ‘worth it’ in light of a much more competitive services environment than the competition they face in hardware.

Consumers are used to a certain bar of quality with Apple hardware, and I do wonder what will happen if that bar is not met in services. The disparity between hardware and services takes shape when we look at the research we did on HomePod owners. Note the visual below on overall customer satisfaction with HomePod (largely hardware related) vs. Siri satisfaction, which is services related.

Apple’s unique approach to hardware engineering means they will likely always score high customer satisfaction on hardware, but I want to see satisfaction with their services go up. One of the main critiques of Apple’s services at large is consistency, meaning whether the service consistently does what you expect it to do. This is true of Siri, even iMessage, etc. HomePod owners ranked Siri’s inconsistency in accurately fulfilling a request as the biggest frustration they have with Siri on HomePod. My iMessage point is made clear by anyone, and there are many, who has persistent issues keeping iMessages in sync between their devices. I have a Mac, iPhone, and iPad, and not a week goes by that I don’t have some issue with iMessage keeping messages in sync across devices. I know I’m not alone, based on the many identical complaints I see in my Twitter timeline.

Even as Apple News+ rolls out, I find myself having consistency problems. Magazines I’ve downloaded are not synced across different devices, meaning I have to go re-download them on a new device. Inconsistencies and inconsistent experiences with core services are not things consumers will tolerate when they shell out money on a monthly or annual basis. Apple’s strength has never been in cloud-based, services-driven solutions, and it is the area where I see them most at risk when it comes to integrating a core services business as the third leg of Apple’s total solution.

I’ve mentioned this before, but sometimes I wonder how much more vertical they will have to go in the cloud in order to control more of their services destiny. As Bob O’Donnell and I mentioned on the Tech.pinions podcast last week, Apple’s services run entirely on someone else’s cloud platform, which is a bit of an oddity for how Apple usually does things. Perhaps a more vertical approach to cloud is in Apple’s future.

The other option could be a few acquisitions of solid backbone cloud companies. I’ve seen investors mention Box or Dropbox as options, with the appeal of bringing on teams from those companies that tend to be very good at a cloud services/synchronization approach. Ultimately, this is an important area to watch for Apple because the next innovation cycle, be it AR glasses or something else, is going to be hardware and services driven much more so than smartphones ever were.

Intel Helps Drive Data Center Advancements

At last week’s Intel Data-Centric launch event, the company made a host of announcements focused on new products and technologies designed for the data center and the edge. Given that it’s Intel, no surprise that a large percentage of those product launches focused on CPUs designed for servers—specifically, the second generation of the company’s Xeon Scalable CPUs, formerly codenamed “Cascade Lake.” However, as I’ll get to in a bit, the largest long-term impact is likely to come from something else entirely.

Similar to the first-generation launch of Xeon Scalable back in July of 2017, Intel focused on a very wide range of specific applications, workloads, and industries with these second-generation parts, highlighting the very specialized demands now facing both cloud service providers (CSPs) and enterprise data centers. In fact, they have over 50 different SKUs of Xeon Scalable CPUs for those different markets. They even added a dedicated new line of CPUs specifically focused on telecom networks and the needs they have for NFV (network function virtualization) and other compute-intensive tasks that are expected to be a critical part of 5G networks.

A key new feature of these second-generation Xeon Scalable CPUs is the addition of a capability called DL Boost, which is specifically designed to speed up Deep Learning and other AI-focused workloads. As the company pointed out, most AI inferencing is still done on CPUs. Intel is hoping to maintain that lead through the addition of new vector neural network instructions (VNNI) to the chip, as well as additional software optimizations it’s doing in conjunction with popular AI frameworks such as TensorFlow, Caffe, PyTorch, etc.
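
For those curious what DL Boost looks like at the instruction level, here is a minimal, hypothetical C sketch of the kind of int8 dot product the new VNNI instructions accelerate; in practice, inference code reaches this path through frameworks like TensorFlow or Intel’s libraries rather than hand-written intrinsics, and the array sizes and values here are purely illustrative.

```c
/*
 * Illustrative only: the AVX-512 VNNI VPDPBUSD instruction fuses the
 * multiply-and-accumulate steps of an int8 dot product into one operation,
 * which is the core of what DL Boost speeds up for inference.
 */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Dot product of 64 unsigned-int8 activations with 64 signed-int8 weights. */
static int32_t dot64_u8s8(const uint8_t *a, const int8_t *w)
{
    __m512i va  = _mm512_loadu_si512(a);
    __m512i vw  = _mm512_loadu_si512(w);
    __m512i acc = _mm512_setzero_si512();

    /* 16 lanes, each accumulating four 8-bit products into a 32-bit sum. */
    acc = _mm512_dpbusd_epi32(acc, va, vw);

    return _mm512_reduce_add_epi32(acc);  /* sum the 16 partial results */
}

int main(void)
{
    uint8_t a[64];
    int8_t  w[64];
    for (int i = 0; i < 64; i++) { a[i] = 1; w[i] = 2; }

    /* Compile with: gcc -O2 -mavx512f -mavx512vnni example.c */
    printf("%d\n", dot64_u8s8(a, w));  /* expect 128 */
    return 0;
}
```

The point of the instruction is simply to do in one step what previously took several, which is why Intel argues CPUs can remain a credible home for a lot of inference work.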

Despite all the CPU focus, however, the sleeper hit of the entire event, in my mind, was the release of Optane DC Persistent Memory, which works in conjunction with (and only with) the new Xeon Scalable CPUs. Based on a technology that Intel has been working on for 10 years and talking publicly about for roughly a year, Optane DC (short for Data Center) Persistent Memory is essentially a low-cost complement to traditional DRAM that allows companies to build servers with significantly more memory (and at a much lower cost) than would otherwise be possible. Available in 128, 256 and 512 GB modules (which fit into standard DDR4 DIMM slots), this new memory type adds an entirely new layer of storage and access hierarchy to existing server architectures by offering near DRAM-like speeds but with the larger capacities, lower costs, and persistence more similar to SSDs and other types of traditional storage.

In real-world terms, this means that memory-dependent large-scale datacenter applications, like AI, in-memory databases, content delivery networks (CDNs), large SAP Hana installations, and more, can see significant performance gains. In fact, at several different sessions with Intel customers who were early users of the technology, there was a tangible sense of excitement surrounding this new memory type and the benefits it provides. Quite a few discussed using Optane Persistent Memory with some of their toughest workloads and being pleasantly surprised with the outcome. As they pointed out, many of the most challenging AI workloads are more memory-starved than compute-starved, so opening up 6 TB of active memory in a two-socket server can make a very noticeable (and otherwise unattainable) impact on performance.

Optane Persistent Memory is also the first hardware-encrypted memory on the market, thanks to onboard intelligence Intel designed for the device. Intel provides two modes for the Persistent Memory to operate: the first, called Memory Mode, is a compatibility mode that lets all existing software run without any modification, and the second, called App Direct Mode, provides greater performance to applications that are adjusted to specifically work with the new memory type.
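
To give a rough sense of what App Direct Mode asks of developers, here is a minimal sketch using Intel’s open-source PMDK libpmem library; the mount point, file name, and region size are hypothetical, and Memory Mode, by contrast, requires no code changes at all.

```c
/*
 * A minimal sketch of App Direct style usage with libpmem (part of PMDK).
 * The path and size below are purely illustrative; a real application
 * would need proper error handling and a recovery strategy.
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE (64 * 1024 * 1024) /* 64 MB region, illustrative only */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file that lives on a DAX-mounted persistent memory filesystem. */
    char *addr = pmem_map_file("/mnt/pmem/example-pool", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Write directly to the mapped region with ordinary loads and stores. */
    const char *msg = "state that survives a power cycle";
    strcpy(addr, msg);

    /* Flush CPU caches so the data is actually durable in the module. */
    if (is_pmem)
        pmem_persist(addr, strlen(msg) + 1);   /* cache-line flush path */
    else
        pmem_msync(addr, strlen(msg) + 1);     /* fallback for regular files */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```

The trade-off is straightforward: applications left alone simply see a much larger memory pool in Memory Mode, while applications reworked along these lines get byte-addressable persistence and the performance benefits Intel attributes to App Direct Mode.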

In addition to the Xeon Scalable and Optane announcements, Intel also discussed new intelligent Ethernet controllers designed for data center applications, and some of their first 10nm chips: the new Agilex line of FPGAs (Field Programmable Gate Arrays—essentially reprogrammable chips). Though they are typically only used for a limited set of applications, FPGAs actually have a great deal of potential as accelerators for AI and network-focused applications, among others, and it will be interesting to see how Intel continues to flesh out their wider array of non-CPU accelerators.

All told, it’s clear that Intel is now thinking about more comprehensive sets of solutions for data centers, CSPs, and other institutions with high-performance computing demands. It is a bit surprising that it took the company as long as it did to start telling these more all-encompassing stories, but there’s little doubt that it will be a key focus for them over the next several years. Yes, CPUs will continue to be important, but the reinvention of computing, memory, and storage architectures will undoubtedly yield some of the most interesting developments to come.

Podcast: Intel Data-Centric Event, Cloud-Based Gaming

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing this week’s data center-focused Intel launch event, with a discussion on both their new products as well as what it says about the current state of data center, and chatting about new gaming research and the strong opportunity for multi-platform gaming services.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Software’s Evolution to Services

The services narrative is a hot one right now for a variety of reasons. But one thing worth pointing out, which piggybacks on my analysis from yesterday, is how innovations in the data center, and the underlying technologies powering it, are making a cloud-first world much more of a reality than ever before. These innovations make software’s evolution to services possible.

All Software Evolves to a Service?
Will all software evolve to a service? In many conversations I’ve been having around the industry, the idea that services will eat the world has been a major theme. This point is designed to take Marc Andreessen’s famous “software is eating the world” article and point out that software is becoming a service. Whether all software becomes a service is an interesting debate. I think you can argue most software will become a service, but the business model behind that service may vary.

One interesting observation we have seen through the years is how trends seem to take shape in the enterprise first, then find their way to consumer markets. Sometimes it is the other way around, but generally, the enterprise is where certain trends work out their kinks before finding their way into consumers’ hands. This evolution of software into a services model is one area where I see this happening.

SaaS, or software as a service, has been a defining trend in enterprises for years now. Not only is it a huge market, but more and more enterprises are being run on software that is actually a service. The high demands of the business world, and the speed at which business happens, demand more agile software platforms. This makes cloud-based software/services a more attractive proposition when it comes to workflows, and it also makes them more easily manageable by the corporation’s IT department. This model is a win-win for all involved.

While the type of services delivered to an enterprise will differ from those consumers adopt, I’ve always liked the line of thinking that families are not that different from enterprises. Things like financial management, collaboration, communication, organization, schedule management, etc., are all commonalities families share with enterprises. Yet the way enterprises do these things is largely much more efficient than the way a family does, which raises the question of how more of these services can find their way into consumers’ lives.

One could argue this is an opportunity for platform providers like Apple, Google, and Microsoft but even in that scenario, the solution would still be a services-focused approach delivered through software.

Native vs. Cloud
So here we are again in a native app vs. cloud software discussion. From where the conversation started many years ago, I think we have a lot more clarity on how this will play out. The ideal solution is a hybrid where what runs natively is the shell software and UI, but much of the backend part of the service is run in the cloud. Think Netflix, or Amazon, where the native software is just the interface, and that interface is dynamic and therefore can be changed, customized, personalized, etc., because of all the work happening on the backend.

This is becoming more common in the enterprise and seems like the way forward for consumer software/services as well. Where this debate starts to get interesting now is the role of hardware in a services-driven world.

It’s worth asking whether companies will prioritize certain hardware platforms going forward. Historically, there has been more of this hardware prioritization, but in the future, I’m not so sure how common it will be. Much of this depends on the platforms, but I have a feeling only Apple will prioritize certain hardware over others with their services approach. The rest, including Google and Microsoft, will leverage more of their cloud platform backend for computing and thus can minimize the need to optimize their services for the variety of hardware in the market.

While it has historically not been uncommon for Apple to take a unique approach relative to others, it seems even clearer now that Apple’s services are designed to deepen loyalty and lock-in to the Apple experience around Apple’s hardware, where other companies are going to focus more on deepening lock-in around the services themselves, which can run on any hardware.

Different approaches will yield different results. However, in an extremely competitive services environment, Apple’s services strategy, which in part is designed to preserve future hardware loyalty, is going to have to be genuinely competitive, or Apple may see competition chip away at the dependencies around the ecosystem they have built up over time. A cloud-first/services-first world does not necessarily fit in Apple’s sweet spot, but if they can deliver with their services, then they remain well positioned for the long term.

Ultimately, the pace at which the world is becoming cloud first, even though consumers don’t think about it this way, is fascinating and is already changing the nature of competition in consumer markets.

Scooters, Bikes, and ‘The Third Lane’

As a long-time telecom analyst, I’ve done numerous projects and written countless reports about the ‘last mile’ problem. In fact, one of the most promising use cases for 5G is using wireless to get from a fiber drop (or small cell) to the home, since FTTH has proven so cost-prohibitive. But Tim Bajarin’s Techpinions column on Monday, “How Scooters Are Rewriting our Views Of Personal Transportation”, got me thinking about an equivalent problem that exists in how we get from A to B, and how electric scooters, bike/e-bike sharing, and the like can help with that ‘last mile’ challenge. Tim wrote that his Element folding scooter has helped with that ‘last mile’ in certain instances. I’d like to expand on that concept in this column, and introduce another metaphor: The Third Lane.

Two weeks ago, in a column on “Tech’s Unintended Consequences”, I wrote, as a prominent example, about how Uber, Lyft, and the other TNCs (transportation network companies) are creating enormous congestion problems in major cities. At the same time, investment in public transportation has waned, leading to a vicious cycle of higher fares and declining service. But these ‘personal transportation solutions’ (PTSs) could be part of a broader solution for the last mile, and in a way that enhances, rather than eviscerates, our current transportation infrastructure. Consider public transport, particularly in close-in suburbs (rather than cities, where more is walkable). Often, a bus or a commuter train drops you a couple of miles from your final destination. This is where TNCs have proven valuable, but they are also clogging roads that were never built to handle that sort of volume. Or think about how, in Silicon Valley, there’s a train that connects the major towns (Palo Alto, Mountain View, etc.), but from there people have to fan out to offices from A to Z. Or consider my home town of Boston, where an entire new neighborhood and series of office buildings (the Seaport) was built, employing tens of thousands of people, but requires a 1.5-2 mile walk, often in crappy weather, from the closest subway station. The TNCs (and private shuttle buses subsidized by companies) have come in to fill that gap. But they are not a viable long-term solution for so many one-off, one-person, short-haul trips.

This is where these ‘third option’ solutions such as bike sharing, e-bikes, and electric scooters can help fill an important gap. While they might not be optimal for a commute of more than 5 miles for most people (and the bike lane infrastructure might not exist for that entire route), they’re perfect for a couple of miles. The issue is, what part of the road can they use? They’re not permitted on sidewalks. And in many cities, there aren’t adequate (and protected) bike lanes. As a result, the percentage of people who use PTSs is limited to about 5%, mainly zealots and the fearless/intrepid.

So I’m going to borrow a concept coined by longtime Starbucks CEO Howard Schultz, who described his cafes as ‘The Third Place’. If PTSs are going to be a viable component of a multi-modal transportation solution, they need a safe and enjoyable passage from parking junctions and transport stops to their final destination. I call it ‘The Third Lane’. That means reconfiguring or building a protected road lane or part of a sidewalk that radiates out to places people live, work, and play. We might not be able to build safe lanes everywhere for each person’s individual commute, but we can think about corridors that serve large clusters of people. In addition to being protected, these lanes need to be properly surfaced, since scooters and bikes can’t handle potholes, sewer ruts, and the like in the way that cars can.

A critical piece of this is that municipalities have to be part of the overall planning. We can’t have Lime, Bird, etc. barging in and then getting regulated after the fact, having befriended no one. Think about downtown Atlanta as an example. There’s a MARTA stop in Buckhead (with no parking), and there are probably 100,000 people employed within 2 miles of that station, at numerous office building clusters. What if they added a lane/path/portion of sidewalk along key corridors where people could use PTSs, reserved in advance, to get within a couple hundred meters of their final destination? And, as an alternative/supplement, a system of buses along dedicated lanes that run in a loop, sort of like an airport rental car shuttle?

This mentality exists in some of the denser, more forward-thinking cities. In places like Amsterdam and Copenhagen, all four modes of transportation are on relatively equal footing, from a planning perspective: car, public transport, bike/PTS, and pedestrian.

So, my message to the Limes and Birds of the world, with your dockless PTSs: use your goodwill, your data, your AI, and your tens of billions in valuation to work with local governments to create viable ‘third lanes’ along key corridors. Pick a couple of signature projects, where there are large numbers of workers who need to get a couple of miles from a transport or parking hub: South Station to the Seaport in Boston; Buckhead in Atlanta; Georgetown in Washington; from downtown Miami’s emerging multi-modal hub to the Brickell area, etc.

I realize it’s not easy, and that in many cities, finding the ‘real estate’ for that third lane is a big challenge. But it would be great to try this, in a greenfield sort of manner, in a few spots where it’s both viable and serves enough people that we could gather some good data. Sort of like some of the larger scale ‘smart city’ demo projects that have kicked off in places like Amsterdam and Toronto.

All Eyes on Data Center Innovation

I know the data center is not the sexiest of topics. But what I find most interesting about the current state of the industry, which is specifically in a lull of consumer hardware innovation, is the rapid pace of innovation happening in the data center.

One of the tricks of being an analyst is to cast a wide net and look for patterns. As many of our readers know, some of the patterns I like to focus on are ones I see happening in semiconductors. My saying goes “it is much easier to predict the future when you understand the semiconductor industry roadmap.” Right now, nearly every semiconductor company has shifted resources and focus to the data center.

I appreciated this analysis by my friend and fellow industry analyst Kevin Krewell, who dug into NVIDIA’s announcements from their recent event and outlined how NVIDIA is now a data center company. While this does not mean NVIDIA is not going to keep competing in consumer graphics, the reality is the upside for NVIDIA, as well as a significant portion of their research and development dollars, is going toward technology designed for the data center. What’s more, look at NVIDIA’s stock price as they transitioned from consumer graphics to a data center standard.

NVIDIA reached all-time highs and will likely get back there once investors mentally grasp what is happening in crypto, and with so much innovation still ahead in the data center, you can bet there is still upside for growth.

Another storyline is AMD. In fact, perhaps the single biggest point to showcase how bullish investors and the broader market are on data center technologies is to look at AMD’s stock. In case you have never seen its long-term arc, here it is.

For most of its life, AMD traded below $5 a share. But as the upside in cloud technology and the huge need for more data center technology expanded, AMD benefitted. Even though they still have a small share of the data center market and only a small double-digit share of the PC market, AMD’s PE ratio is roughly six times Intel’s. Cloud platform providers like Amazon, Microsoft, and Google like having supplier diversity and are even segregating product offerings, or instances, based on specific beneficial technologies provided by these silicon companies. The data center TAM is a big one from a dollar standpoint, looking to grow beyond 300 billion dollars sooner than most forecasts suggest. AMD still likely has some data center-specific announcements to come this year, but yesterday was Intel’s time in the spotlight.

Competing with Intel’s Monolithic Integration
At Intel’s analyst day last December, I recall then-interim CEO, now permanent CEO, Bob Swan stating that doubling down on monolithic integration is a core company strategy for Intel to succeed. Yesterday’s data center product launch of the Cascade Lake architecture was monolithic integration on full display. Intel showcased an architecture that includes CPU, accelerators/FPGA, memory, Ethernet/networking silicon, and the software stack to tie it all together. The only thing missing was a GPU, and we all know that is coming.

Adding the GPU will fill the most significant gap for Intel once they do it. Some are skeptical, but if any team can do it, the team they have built is the one. It was also interesting that, during Intel’s keynote, the company noted that 50% of machine learning inferencing will take place at the edge. If this holds, then it is good for Intel, and forecasts put the TAM for inference alone at 10 billion dollars in 2021. Intel’s integration of the hardware and software stack to take on the inference market will alleviate concerns about their lack of a GPU for now.

When it comes to Intel’s integration, monolithic at that, which is the bread and butter of the company, I do wonder what pressure that will put on the competition to follow a similar path. NVIDIA buying Mellanox is perhaps a step in that direction, as NVIDIA adds a networking dimension to their portfolio. NVIDIA also has architectural benefits to their GPU solution and software stack, which can absorb more tasks from the CPU should the market want that. I strongly doubt NVIDIA will make a move to CPUs, but nothing is impossible.

At a high level, the fascinating part of how the data center is evolving holds true whether heterogeneous solutions are what cloud platforms want, or whether Amazon, Google, and Microsoft simply offer products and instances based solely on Intel, NVIDIA, or AMD solutions separately.

Going back to the pattern I think is most interesting here: the clarity that almost all major investment and innovation from silicon companies is moving to the data center assures us that much of the world is now finally moving to cloud first. Even roping Apple into this conversation, their major push with services is a cloud-first strategy, and it has implications for how the company thinks about its own vertical strategy when it comes to its data centers and its services.

We have long talked about what the world would look like when most of the computing is done in the cloud, and it seems we are now able to see a timeframe where that future becomes a reality.

The Grass is the Greenest it Has Ever Been

I make no secret that I have been using an iPhone as my primary phone since 2007. As an analyst, I have to try different products in all kinds of categories, including phones, which means I have been using Android phones as well as what came before, from Windows Phone to PalmOS and every flavor of feature phone before then. No matter what phone I tried, however, my trials ended with me going back to the iPhone. There are two main reasons for that: one is that I prefer the UX, and the second is that the value of using devices across the Apple ecosystem is much more evident to me.

In February, I got the Samsung Galaxy S10+ to test, and I was expecting to follow a pattern similar to my previous Samsung phone trials: I love the design and the way they fit in my small hands, I like the camera, but ultimately I am overwhelmed by the UX. What ended up happening instead is that I am still using the phone, and I have seriously thought about making the switch. So here is what has changed and what is holding me back.

A Cloud World

Maybe it is the maturity of the smartphone market, which has led to app parity for the most part. Or perhaps it is the fact that even in a home like ours, which has more Apple devices than any other brand, we have happily let other ecosystems come in and take a slice of our time and money pie. Or maybe it is the combination of the two that helps consumers move from device to device more easily. Of course, there are hardware differences, some proprietary apps or features across the various ecosystems, and differences in how brands approach privacy, but the point, I think, is that with services and apps that go across devices thanks to the cloud, moving across ecosystems is more comfortable than it has ever been.

Ultimately this is why I think Apple is doubling down on services. Yes, they will get an extra revenue source, but more importantly, they will create more stickiness to their ecosystem, which will lead consumers to think twice before moving on. While using the Galaxy S10+, for instance, I was quite happy to move from CarPlay to Android Auto and have Google Assistant promptly bring up whatever song I wanted from Apple Music while I was driving, but I was unable to play my playlists even though they are available in iCloud.

Two Things Are Holding Me Back

There were two things that I particularly missed when using the Galaxy S10+ and the Galaxy Watch Active, and neither is out of reach for Samsung.

The first thing I missed while using the Samsung Galaxy S10+ was iMessage. It is not about the green and blue bubbles, nor is it about saving on text messages. What I missed was the ability to send a message from any device I was on, as I usually do; iMessage is a core part of how I communicate at work. Unfortunately, while Windows 10 has made some progress in supporting text messaging across Android, the experience is just not as fluid. I hope that Samsung will spend some time creating a better-optimized app that goes across their phones, tablets, and Windows PCs. I do wonder how many iPhone users would consider a move to Android if iMessage were available as a cross-platform app. While there are other apps that I use to talk to people, iMessage is by far what I rely on every day.

The second thing I missed was the deep integration that comes from controlling all the pieces of the experience. The best example is possibly the vibration the Apple Watch gives when you are using Apple Maps directions and should be taking a turn. I prefer Google Maps to Apple Maps, but as Google has given up on designing an Apple Watch app, I end up using Apple Maps when I drive. With the Galaxy Watch Active, which I see as the best alternative to the Apple Watch in the market today, I missed that gentle tapping, a feature that might be hard to implement due to the combination of the watch running on Tizen and Samsung not controlling the experience on the Google Maps side.

As you can see, both my examples have little to do with Samsung’s hardware and a lot to do with the limitations Samsung is facing because they are not controlling all aspects of my experience.

More Confidence, Not Technology, Would Make Samsung’s Devices More Desirable

Aside from not being able to control the full experience, I also noticed that the options available on Samsung’s devices are just too many. Yes, there is such a thing as too much choice. Especially for consumers coming to Samsung from iOS, I think the available range of options can be overwhelming. In a way, consumers on iOS are used to Apple making decisions for them. When it comes to settings, users can, of course, change them, but by and large, Apple is picking defaults that many users will never change, mostly out of convenience or because they are not savvy enough to go and change them. In most cases, Apple’s choice does not hinder the user experience and simplifies things for the user.

It seems to me that Samsung has opted for the opposite: they believe there is value in giving users all the options there could be and letting them figure it out. The new One UI helps by surfacing the most common use cases and settings, but you can easily find yourself three layers down in the options menu at any given turn. I am not sure if this broad set of options is the manifestation of a lack of confidence by Samsung, but I think that over the years their software implementation has improved, and so has their understanding of what consumers want rather than what is technically possible, so they could make those choices for their users. The camera UI is an excellent example of where Samsung has spent some time making decisions on default settings and leaving options to the more advanced users, but there is more room for simplification in my view.

Making those decisions for consumers will also improve the cross-device experience you get from owning multiple Samsung devices. Samsung might be at a disadvantage because they do not control the underlying OS, but this disadvantage can turn into an advantage as consumers come to care more and more about the in-app experience and seek out best-of-breed products. In other words, if productivity is what I care about the most, I am likely to find a PC and phone combination that empowers me to be efficient, and this might mean that my two devices are not running the same OS and are not part of the same ecosystem. The same can be said about gaming or media consumption. We have seen Samsung work with many partners to bring unique experiences to their products. I hope that such partnerships will extend to developers as well at the next SDC in the fall. Pointing at the large installed base of devices developers have access to is useful, but working with them to create better experiences is critical for developers and creates more stickiness for Samsung.

Two big reasons Mark Zuckerberg is calling for more government regulation of the Internet

Over the weekend, Facebook CEO Mark Zuckerberg wrote an op-ed piece in the Washington Post that calls for new standards bodies, government oversight, and potential regulation of the Internet to deal with privacy, security, hate speech, and fake news.

Zuckerberg has faced serious criticism for allowing Facebook to become a vehicle for more than just social media, a place where people can post almost anything, from conspiracy theories and hate speech to fake news and opinionated and often bigoted content. While it has its own rules and regulations, its business model has kept it from being as restrictive in blocking objectionable content and, in my opinion, has allowed Facebook to become something that I don’t believe Zuckerberg could have imagined when he created it.

All of the things Zuckerberg listed should be looked at closely by governments around the world. As he points out:

“Internet companies should be accountable for enforcing standards on harmful content. It’s impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services — all with their policies and processes — we need a more standardized approach.”

He also calls for legislation to help protect elections, privacy and security, and data portability.

Although there are a lot of reasons he felt compelled to suggest more government oversight of the Internet, I see two key dynamics in Zuckerberg’s op-ed that are very specific to Facebook and his role as its CEO.

The first is that this is an admission that he and his team have failed miserably in managing Facebook’s content controls and that the type of material that gets through their filters has gotten out of control. He and his team have been backed into a corner due to this mismanagement of their site’s content. By themselves, they can no longer protect their users from bad content; they need help, from either a broad standards body working with government regulators or direct government regulation, to keep fake news, threats, conspiracy theories, etc. from ever getting posted on Facebook.

Second, Zuckerberg and team need cover, or someone else to blame, for better policing of the content that is allowed to be posted on Facebook. In a sense, he is asking for help to manage the future of Facebook. If Zuckerberg tried blocking sites or specific content that is harmful beyond their current rules and regulations, he would be caught in a First Amendment battle that he can’t win. While he has the right to block certain sites based on content that does not comply with Facebook’s guidelines, there are so many gray areas that would rile up folks across the political and ideological spectrum that he needs help determining what ultimately is OK to be posted on Facebook.

This is a smart move by Zuckerberg and his team. Many politicians are already on their case, wanting to either add more controls than Facebook would like or even break the company up, as Senator Warren has suggested in her quest to take on many tech companies and their real or perceived power. By enlisting governments to help deal with their own content problems, he can potentially head off even greater government regulations that could cripple their business model.

He also gains the cover to be more aggressive in blocking sites and objectionable content that is extreme or used for propaganda, fake news, privacy intrusions, etc. Having standards bodies or direct government regulation can go a long way, especially in the US, toward holding off First Amendment battles that are impossible to win.

Investors will also likely view regulation as a good sign at this point. Should Facebook become regulated, there is a good chance their stock rises, as the street will understand that regulation will secure Facebook’s dominance. Regulation protecting incumbents is an underlying theme that has been observed throughout business history. Whether or not protecting their dominance is a reason for Zuckerberg to request regulation is unknown; however, it is clear regulation of Facebook could potentially hurt competition and force regulation on smaller, newer social media services even before they have a chance to grow and challenge Facebook’s position.

Gaming Content Ecosystem Drives More Usage

As I wrote a few weeks back, the gaming market is very large and extremely diverse. People spend extraordinary amounts of time and money playing games across an increasingly broad range of devices and platforms. From marathon gaming sessions on tricked out desktop PC gaming rigs, to snippets of game “snacking” on smartphones while standing in line or killing time in other situations, digital gaming has become a mainstream part of our culture.

In fact, gaming has become so popular that it’s created an entire ecosystem of gaming-related content and other activities, such as professional gaming eSports, that have proven to be enormously (though sometimes a bit surprisingly) popular as well. Game streaming video networks like Twitch here in the US, or Douyu over in China, draw enormous audiences measured in the millions on a daily basis for the live game streams they typically show. Similarly, Google’s YouTube and the China-based Youku Tudou now host an enormous amount of recorded video content created by both professional and amateur gamers for others to watch.

In a recent survey by TECHnalysis Research on gaming trends among US and Chinese consumers, we discovered that US gamers who participated in the survey said they watched about 12 hours of gaming-related content a week between Twitch and YouTube, while Chinese gamers averaged 11 hours between Douyu and Youku Tudou. Those are impressive numbers to be sure, but they’re downright staggering when you add them to the average 65 hours of weekly gaming time (in the US) or 47 hours (in China) respondents also reported; combined, that works out to roughly 77 hours a week, or about 11 hours a day, for US gamers. To be clear, these numbers were self-reported (and a series of checks were put into place to try to make them realistic), but they’re likely too high. Still, regardless of the exact numbers, it’s clear that gaming and related activities take up an enormous amount of many people’s non-sleeping hours.

On top of that, a surprisingly large group of gamers said they created gaming content through their own live-streaming efforts and/or uploading of their own recorded games. As Figures 1 and 2 below show, this is particularly true in China, where PC-based gaming is even more popular than it is here in the US.

Fig. 1

Fig. 2
As you can see, creating original content is still done by a bit less than half of the US respondents (the numbers add up to more than 100% because some people both live-streamed and uploaded content), whereas in China only 40% haven’t tried it yet. What’s particularly interesting, though, is that the practice is remarkably strong in both countries up through the 35-44 age group; it’s not just millennials who are doing it.

Similarly, it’s not just millennials who are participating in gaming competitions and watching professional gaming via eSports TV shows and events. Driven in part by the popularity both of gaming and gaming content, the eSports phenomenon has taken many by surprise. As the survey results indicate, however, it’s also very popular with many gamers, with around 65% of US gamers and about 82% of Chinese gamers saying they had watched or even attended a professional gaming event. Watching those tournaments also clearly inspired many gamers to participate in their own gaming competitions, just as watching other professional sports often encourages participation in them. An impressive 66% and 65% of US and Chinese participants, respectively, said they had participated in either a PC, smartphone, or game console-based competition. Again, the participation rates stay fairly consistent through the 45-54 age group in the US and the 35-44 demographic in China.

Even more importantly, all this consumption and creation of gaming-related content is inspiring gamers to spend even more time and money on their gaming habits. In a classic virtuous circle type of model, the interest in gaming drives interest in gaming-related content, which in turn drives yet more interest in gaming, and on and on.

Obviously, there are limits to how far the gaming phenomenon can extend, and some people are already facing challenges balancing their game time with the rest of their lives. Still, it’s apparent that gamers feel very passionately about their hobby, and it’s equally apparent that it represents a lucrative opportunity for companies that can tap into that passion. Given the level of engagement that many people have with gaming, and the growing ecosystem that now surrounds it, that opportunity is sure to last for many years to come.

(You can download highlights of the TECHnalysis Research Multi-Device Gaming Report here.)

How Scooters are Rewriting our Views of Personal Transportation

Not long after the original Segway was launched, I had the privilege of being able to test one. I had met its creator, Dean Kamen, at a dinner in San Jose a year before the Segway launch. Others who had actually been told about it, like Steve Jobs and noted venture capitalist John Doerr, went on record saying they felt this product from Mr. Kamen would be a game changer.

While the Segway did make a splash and did generate interest as a short-range mode of transportation, it never took off. In fact, it was even banned from sidewalks in some cities because it was a nuisance to pedestrians, and it was deemed unwelcome in many other cities that refused to let people use them on city streets.

But the one thing the Segway did do was introduce what is called the last-mile transportation link. And it has birthed the current “last mile” mobile electric vehicle of the moment: the scooters that populate the streets and roads of many cities today.

The chart below shows the areas of the world where scooters are taking off.

Here in the US, they are also populating large cities, but in many, they have become controversial due to three key factors.

First, without any regulation, scooters, like the ones from Bird and Lime, started showing up in huge numbers in large cities and were more of a nuisance than a welcome vehicle for last-mile journeys. Many cities banned them outright in order to develop rules and regulations guiding their use, as well as to make companies bid for the chance to place their scooters in these towns. Most cities now have solid regulation in place to control how many can be placed in a city, along with requirements for proper insurance and guarantees that the scooters are picked up at night, to keep the sidewalks from being clogged and the scooters under a semblance of control.

Second is the fact that they can be dangerous. These scooters, while not speed demons, do travel at around 15 miles an hour, and if you fall off at that speed, you could be injured. The chart below lays out the most common injuries.

CNET spoke to trauma centers in multiple cities to get feedback on the kinds of scooter-related injuries they were seeing:

“CNET spoke to trauma centers in Denver, San Diego, San Francisco, and Austin. All reported an uptick in injuries from scooter accidents. It’s been just a few months since the vehicles were unleashed onto city streets, so emergency room doctors say they’re only beginning to collect data.

“We see some scary injuries,” said Dr. Chris Colwell, chief of emergency medicine for Zuckerberg San Francisco General Hospital and Trauma Center. “There’s still a lack of recognition of how serious this can be.”

Colwell said his emergency room is logging about 10 injuries a week. They range from extensive bruising to severe head trauma. Given the hills in San Francisco, he also sees a lot of road rash. “We saw a guy who fell over on his back this week,” Colwell said. “He ended up going through so many layers of skin, and we had to essentially put him to sleep to clean out the gravel embedded in his back.”

Bloomberg recently reviewed a study published in JAMA Network Open and found the following:

“The vast majority of the injured were riders as opposed to pedestrians. They averaged around 34 years of age and were 58 percent male. The study revealed a general lack of operator adherence to traffic laws or warnings by the scooter companies themselves, according to an article published Friday in JAMA Network Open. Though scooters can reach 15 mph, less than 5 percent of riders were reported to have been wearing helmets.
About 40 percent of patients had head injuries, and almost 32 percent suffered broken bones. The study said a significant subset of the injuries occurred in patients younger than 18.
The researchers don’t try to compare your chances of getting killed on (or by) a scooter versus a car, but rather the physical damage being wrought. And how did these 249 California riders get hurt, exactly? More than 80 percent just fell off, according to the study. Eleven percent hit something.”
https://www.bloomberg.com/news/articles/2019-01-25/electric-scooter-injuries-pile-up-but-lawsuits-are-hard-to-make?cmpid=BBD012519_BIZ&utm_medium=email&utm_source=newsletter&utm_term=190125&utm_campaign=bloombergdaily

This is a significant issue and one that will plague scooters for years unless riders do more to protect themselves, such as wear helmets and even knee and elbow pads when riding.

The third area is the business model. The Bird and Lime scooters cost around $550 each and have a life span of only around 1-2 months at best.

ExtremeTech talked with Quartz’s Ali Griswold, who did a study of one of the scooter programs in Louisville, KY:

“Quartz reporter Ali Griswold performed an analysis on revenue-per-scooter using open data sets provided by Louisville, KY. The question of how much revenue companies earn per scooter is an interesting one.

“Griswold’s analysis was made possible by the fact that the initial Louisville KY data sets included a unique identifier for each scooter, allowing her to track how long the vehicles persisted in the city. Later data dumps have removed this modifier, likely to prevent the kind of analysis she performed.

What she found is that the average scooter lived 28 days, with a median lifespan of 23 days. Focusing only on the oldest vehicles in the data set improved this slightly, to 32 and 28 days, respectively. Using the oldest vehicles for a baseline and excluding December data (her data set ran from August – December), the median vehicle took 70 trips over 85 miles.

When you run through the various costs and revenue, there’s just no way these companies are doing anything but losing huge amounts. The average revenue generated per scooter in Louisville, at least, comes out to between $65 – $75. Data available elsewhere online suggests each scooter costs Bird $551. The company wants to get that down to $360, but even so, it’s losing $285 – $295 per scooter deployed in the Louisville area.”

I have read other stories on the economics of the on-demand scooter business, and they all question the business models and whether they can ever be profitable and sustainable.

There is another interesting economic model around scooters developing, and that is one in which a person buys a scooter and uses it as needed for last mile transportation.

Although I have used a Lime Scooter once or twice for short distance travel, last fall I tested the Element Folding Electric Scooter from Jetson.

This particular model costs $299, weighs 18.74 lbs, and has a range of up to 10 miles. They have a higher-end, longer-range, and more durable model called the Quest, which sells for $539.00, weighs 28.4 lbs, and has a max range of 18 miles.

If you were going to use this a lot, then the Quest would be the better purchase. But in my case, the Element meets my needs, as I mostly use it to go to the local grocery store and around the neighborhood, although I have packed it in my trunk and taken it with me to downtown San Jose and used it there to travel short distances to meetings.

The light weight of the Element makes it easy to put in my car’s trunk or in the bottom of the grocery cart at the store. Portability is very important to me, and having something like this in my car for last-mile journeys has been quite useful.

Both Jetson Scooters get high ratings, and so far the Element has held up well over the six months I have been using it. Of course, many other companies, such as Segway, Xiaomi, Razor, Gotrax, Gilon and others see the market for personal scooter usage and are ramping up new models for our market.

I am pretty health conscious as well as balance challenged, so I won’t use the scooter without a helmet and at least elbow pads. But so far I have had no major issues with the Jetson Element and continue to use it as needed.

While owning one’s own scooter is not for everyone, and the on-demand model that Bird, Lime, and others are using has merit, especially in big cities where getting to a location fast and easily is called for, I have a sense that the ownership model has some serious legs and could become one of the more interesting ways scooters are used in the future.

Podcast: Apple Services Event

This week’s Tech.pinions podcast features Tim Bajarin, Carolina Milanesi and Bob O’Donnell analyzing this week’s services-focused event held by Apple, including discussions around the new Apple Card credit card service, the TV+ streaming TV and content aggregation application, the News+ magazine service, and the Apple Arcade gaming service.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Cloud-Streamed Gaming: More Questions Than Answers

Gaming is an ultra-hot topic in the world of tech right now, made clear by recent big announcements from both Google (Stadia) and Apple (Apple Arcade). Gaming has been one of the bright spots for a challenged PC market, with PC gamers willing to refresh hardware more often than other consumers and to spend more money every time they do it. Gamers spend money in online gaming marketplaces, in a still-thriving console gaming market, and—of course—increasingly on mobile platforms such as Android and iOS. All told, gaming drives huge profits across a wide range of companies, and that’s before we start adding up the dollars associated with eSports. One issue that’s becoming increasingly clear, however, is that in the near term the gaming market is likely to experience some growing pains as new technologies become available and long-standing business models face disruption.

Cloud Gaming, or Cloud-Streamed Gaming?
At IDC we’re about to embark on a very ambitious gaming survey and forecast project. We plan to run surveys in five countries (U.S., China, Brazil, Germany, and Russia), capturing responses from more than 12,000 respondents, including hardcore and casual gamers as well as dabblers and nongamers. Our goal: to better understand consumer sentiment around everything from hardware and brand loyalty to device refresh rates to spending on software, services, and accessories. We’re also devoting an entire section to cloud-streamed gaming. One of the most challenging things to do in any consumer survey is to ask respondents about technologies or services that are not yet widely shipping (or understood). As we’ve been building our survey, we’ve had some spirited internal debates about the nature of cloud gaming versus cloud-streamed gaming and more.

What’s the distinction? As my colleague Lewis Ward notes, most games today have some cloud element to them. Certainly, every multiplayer console, mobile, or PC game that lets us play against friends, family, and strangers all over the world falls into this camp. Apple made a point of saying that Apple Arcade will let you play offline, a dig at Google’s online-only Stadia. However, in the next breath, Apple points out that you can jump from iPhone to iPad, Mac, and Apple TV, clearly utilizing a cloud component. And I’m guessing some of those new iOS-only games will have multiplayer modes.

So, essentially, we already live in a cloud gaming world. Stadia, however, is clearly cloud-streamed gaming, as subscribers will be able to play games across a wide range of devices (as long as they’re online and support a Chrome browser). As Ben Bajarin notes, other cloud-streamed services from companies such as Sony and nVidia are even more cross-platform friendly. We’re still waiting to see what other big players, such as Microsoft, will offer. Bottom line, however, is that the experience of a cloud-streamed gaming service will depend heavily upon that network connection.

And it’s here where Google left us with more questions than answers, at least for now. The company didn’t talk in its announcement about specific home broadband requirements or address network latency, which is what will dictate the quality of the experience on Stadia. It also didn’t talk about pricing tiers (or pricing at all). Will games streamed at 4K cost more than 1080P games? Will everyone get 60 frames per second?

At least one promise of cloud-streamed gaming is that with the CPU and GPU in the cloud, gamers can meet on an even playing field, regardless of the device upon which they are playing. Moreover, it means players can play games across their devices, instead of having games they can only play on PC, on console, or on their mobile device. In theory, it’s a very compelling offering, but as with most things in technology, the real value won’t be apparent until we see the execution.

Impact on PC and Console Gaming Markets
If cloud-streamed gaming delivers on its promise (or perhaps the right question is not if, but when) and it removes the local computing power from the gaming equation, what will the impact be on the hardware vendors that sell high-end gaming PCs, CPUs, GPUs, and more? I know at least one prominent PC gaming executive who thinks cloud-streamed gaming will never impact his business; he argues that some people will always want a high-powered gaming rig to play on. To date, this has certainly been true. Hardcore gamers will spend big bucks to gain frame rate advantages that can mean the difference between life and death in a game. Frankly, that element of PC gaming has always bothered me a bit, as it effectively means that those with deeper pockets often enjoy significant in-game advantages.

However, it’s this desire to have the best that makes the PC gaming hardware market so appealing to vendors. Moreover, it’s this faster refresh cadence that also enables game developers to embrace next-generation technologies before any other consumer categories (ray tracing-capable graphics cards being a good current example). The PC gaming hardware market thrives in this cycle, so what happens if cloud-streamed gaming disrupts this? Where does that booming market go if in the future a person using a three-year-old $300 Chromebook has the same experience and gaming capabilities as somebody on a brand new $4,000 gaming rig?

In the end, that is the biggest question nobody can answer right now. Is cloud-streamed gaming disruptive, additive, or something else? The cost of access, the available games, the buy-in from eSports athletes, and the quality of experience will all play a role in the final answer. Regardless, it’s going to be very interesting watching it all unfold over the next few years.

Apple’s Services Crossroads

As I mentioned Tuesday, there is still a lot to say, but I wanted to write one more piece that adds some context before digging into the services products themselves.

I’ve always been fond of using the “only Apple” philosophy in much of my analysis of the company. Even Apple’s management likes to throw this phrase out from time to time to highlight things that are in Apple’s sweet spot and something only Apple can do. Apple is the single most integrated consumer tech company in the world, and that integration allows them to do things others can not. That integration starts with the hardware and extends to software and now services.

The “only Apple” philosophy must now be applied to services, and this adds an interesting new factor into the analysis. As of now, what is clear is that only Apple can deeply integrate these media-rich services like games, news, TV, and music, into Apple hardware. This is, and will always be Apple’s advantage over its competitors. In the same way that all of Apple’s products are designed to work together and get better the more devices you have, we should expect Apple’s services to work best on Apple hardware and work better the more services you consume.

That being said, integration alone is not necessarily a slam dunk for Apple when it comes to the success of their services. Firstly, Apple will have to avoid antitrust situations and likely can’t pre-install apps with services onto iOS but must let users download them of their own choosing. But, once downloaded, Apple can bake in integrated experiences to the overall device and OS that surface more value for their services vs. competition. From this perspective, I want to make a few points that will be an interesting watch and learn from, with Apple and services.

Competition and Cross Platform
From a business perspective, Apple has brought many new observations to the industry and business knowledge. In some cases, they have broken the templates used by business schools and challenged conventional wisdom. But, as often is the case, their integrated strategy provides more lessons to the industry and anything else. This entrance of their integrated services strategy will yield many lessons. Most of them being from a services competition viewpoint.

Apple having to straddle a blurry line of anti-trust with their services will be good overall for everyone. First, because they can’t pre-install things like Apple Music, or Apple News, or Apple TV without getting into anti-trust issues, they will have to genuinely compete with the likes of Spotify, HBO or Showtime, and other news services if they want consumers to download their apps and use these services. Apple has no inherent competitive advantage for premium/paid services in the way they do with hardware and software. While integration is the advantage of an Apple service after you choose it, it is not there by default and consumers will likely weigh all their options including competition.

This is why growth in the user base of Apple’s paid services will impressive because they aren’t starting with an integrated advantage like they are used to. But this also raises a question of cross-platform that I’ve been wrestling with for some time now.

Nearly all services are cross-platform. When it comes to web/digital services, being cross-platform seems like a checkmark to compete but companies developing these services usually only have a subscription revenue model, so they have to achieve scale. This is the fork in the road Apple will come to very soon. Is their services strategy about selling more hardware, or about growing the services business? That is the question.

While I could be wrong, given we don’t have much historical precedent to go off here, I’d argue that if growing the services business is the sole goal then Apple needs to bring certain services cross-platform. Apple News+ and AppleTV+ specifically. Services business require scale and while Apple has that in roughly a billion customers, they are up against the competition in services that are first and foremost cross-platform. Here is why I think that matters.

I’ve mentioned before that our research has shown us that when consumers are presented with a mostly subscription, they heavily weigh the benefits and more deeply scrutinize the investment. Services are not commodities or quick decision purchases. This is why free trials are almost entirely necessary for services. It is very hard to extract monthly money for a service unless the consumer has vetted it is worth it to them. When it comes to my cross-platform point, most consumers do not own 100% Apple hardware. Most of Apple’s user base has an iPhone and a Windows PC and a smart TV brand like Samsung, LG, or Vizio. That combination is the most common combination of the three main screens of the vast majority of Apple’s customer base.

Competitively speaking, I believe having access to content you subscribe to on all the devices you want to access them is important for a subscription service. So for example, Apple News+ becomes more interesting to an iPhone owner with a Windows PC when they can access that content on both. The evidence for this lies in Apple cutting deals to bring the TV app to third-party smart TVs. This strategy is solely for the purpose of consuming the AppleTV+ content one subscribes to on the TV of your choice. I’m convinced Apple needs to treat all their services this way, much like Apple Music on Android, if they want their services business to scale.

Another strong point here is the family plan. Most services don’t have family plans, at least not good or personalized ones for different family members. Which means Apple’s family plan concept is a point of differentiation. However, while it is unlikely to assume an Apple customer has 100% Apple hardware, it is even less likely an entire household has all Apple hardware. So for a family plan to hold its value and differentiation, a potential subscriber would want to know their family has access to the content they pay for on the hardware or platform of their choice.

If Apple does not check these boxes, then it seems likely they are up against tougher competition who is not going to create walls around their services. One quick point, is that certain services make sense to remain exclusive. Apple Arcade for example has no need to go cross platform and adds much depth to Apple’s customer value.

I’m fascinated by all this because we are in the somewhat new ground from a business standpoint, but Apple is also in new territory. Many important business lessons will emerge, and that is one part of many that make all this exciting.

The Regulation of Social Media Debate

If you follow me on Facebook or Twitter, you may have discovered that I have restricted my Facebook posts to pictures of Golden Retrievers, my favorite dog breed, and once in a while I even post some family and friend related G-rated content. My Twitter usage is used mostly for posting my columns and industry commentary.

Over the last five years, Facebook’s “almost anything goes” feature has gotten out of hand, and its lack of policing itself and keeping so much offensive material and fake news on its site is starting to slow its growth. Twitter is even worse. People post things here that run the gamut of blasphemy to outright fake news and will probably be the biggest site to post deep fakes eventually.

Both companies and many others that allow so much uncontrolled content get away with it because they are not traditional media sites. They have no regulation to restrict them. This is a problem that has major ramifications for democracy and even decency and more has to be done to reign in these sites from being propaganda tools for rogue governments and perpetrators of fake news.

Adam Lashinsky, in his Fortune Data Sheet newsletter posted on March 22, 2019, has a recommendation on how to deal with the Wild West of social media and offers his solution in this following exert-

“The solution to this is so simple, by the way: Repeal the legislation that’s responsible for it all. I’m talking about Section 230 of the Communications Decency Act of 1996. It created the fiction that because terrorist-criminals live stream murderous rampages on Facebook, the “social media” company isn’t responsible, accountable, or liable for the content it publishes. You won’t find such garbage on the sites of any of the news organizations I cited above (including Apple News) or on a broadcast network or cable channel. That’s because those news organizations curate what goes on them—and can be sued if what they publish harms someone.

Repeal this misguided legislation, and Facebook (and Google’s YouTube) absolutely will find a way to prevent their publishing platforms from being used for ill. Would it hurt their business models? Of course. What’s more important, entrepreneurial glory and wealth generation or protecting the integrity of democracy and keeping foul content from hurting people?”

I have been making this argument for over five years. Not holding Facebook, Twitter and YouTube and others accountable has become a great threat to democracy and has split countries apart. Traditional media has to come under regulated guidelines in order to stay in business. The kind of derogatory material you see posted on Facebook, Twitter, etc. could never be published in the New York Times, WSJ, and traditional media sites.

Section 230 of the Communications Decency Act has outlived its usefulness. You can argue that we are in this place today because of this Act. It came about at a time when the Internet was young, and in that sense, it has helped it grow exponentially.
But it has been a two-edged sword in that it allowed social media sites to flourish without any controls and liability. As Adam suggests, it is time to repeal this act and make these sites admit what they are-Media sites.

Of course, the lobbyists from Facebook, YouTube, and other social media companies will fight this tooth and nail as they have in the past. But I sense this time is different. They have targets on their back and governments around the world are looking at ways to reign in things like fake news, hate speech, etc. And the way to do it makes the sites that distribute this material liable for what is written and spread on their websites, just like traditional media.

Apple’s Service Offerings Could Drive New Profits to Apple’s Bottom Line

I have been going to Apple launch events for 38 years. Until this week, all have either been hardware or software related with a few service program announcements sprinkled in here and there.

Apple’s Showtime event last Monday was the first one that strictly focused on services by themselves and was designed to clarify how four new service products would aggregate content and allow them to get a piece of the action from the various subscriptions or financial transactions tied to these advanced service offerings. While the event focused on services, the underlining message was an economic growth that bodes well for Apple’s future.

One of the big issues PC vendors struggled with from the beginning of the PC industry, especially those who sell hardware, is that once they sell a PC or peripheral to a user, that is the end of the sale. The exception to this rule was in the printer industry which had a razor/razor blade model where the hardware became a vehicle to sell more ink as long as a person owned that printer. PC vendors tried various ways to provide add-on, recurring revenue models, but never hit on a true recipe for extending their sale beyond the initial purchase of a PC.

Apple perfected the recurring revenue model when it introduced iTunes. When the iPod first came out, you had to actually rip CD’s into the iPod to get your digital music. But once they got the music industry to back their music downloads, Apple delivered the first solid model for gaining additional revenue from a piece of hardware. This eventually begat streaming music subscriptions, and Apple has now added streaming music as a significant leg of their broader services offering.

When the iPhone came out, Apple launched its App Store, and with it, the iPhone revolutionized the smartphone world. The App Store gave Apple another leg in its services offering and delivered another service for aggregated content.

A few years back, Apple introduced Apple Pay. On Monday they added Apple Card, their first credit card, backed by Goldman Sachs. It is tied to Apple Pay, and they did away with late fees and penalties and added Daily Cash benefits that are real cash given to users when purchasing products via Apple Card.

Last year Apple bought Texture, the magazine aggregator that has now become the backbone of their News+ News service that combines magazines and newspaper subscriptions into a single $9.99 bundle. While they only announced a few big name newspapers like the WSJ and LA Times for now, if this service takes off, many other mainstream newspapers who have their own subscriptions services could eventually join Apple News+ some day.

The reason they will be watching this is that even with the success of the New York Times and Washington Posts current subscription program to date, should Apple gain a hundreds millions of News+ subscribers over the next two-three years, they would give these and other holdouts millions of new eyeballs to view their content. Even if Apple got 30-50% of the speculated fees the newspapers give up to be on Apple’s News+ platform, the additional revenue would be more than incremental and make them solid new revenue from people who would never subscribe to their papers alone. Also, it would allow them to drive up ad rates. The New York Times has 4 million paid subscribers today, and their ad rates are tied to these numbers. But let’s say that over the next two-to-three years, Apple gets 50-60 million subscribers to Apple News+ and could deliver that many new eyeballs to the New York Times. Would they not want to join this new service? You can bet that a lot of newspapers are going to be watching this space closely.

Apple also launched a new gaming service called Apple Arcade. They have not priced it yet, but when it comes out this fall, it will have 100 games made exclusively for Apple’s Arcade gaming subscription.

The Showtime focus of the event was the new Apple TV+ offering. Apple brought A-Listers from Hollywood to the event to talk about Apple’s new platform for creative storytelling and is clearly going to invest billions of dollars into creating original content. However, Apple TV and Apple TV+, which will have the original content, is also an aggregator of TV and movie content. While Apple did not announce pricing, which is the billion dollar question, the reality is that whatever they price it at it will be another subscription service that can deliver revenues to their bottom line.

Apple now has six unique offerings under services that have one common economic theme. Each gives Apple a cut of the revenue brought through these transactions or subscriptions. Apple’s services business was $37 billion in 2018 and on pace to be more than $40 billion in 2019. Apple’s hope is to make services a $50 billion a year business in 2020 and that goal seems on track. If anything, these services could help accelerate their business ambitions.

While a lot of people were critical of these announcements in that Apple did not spell out the price of their Apple Arcade and TV+ service, and left a lot of questions about their services open, you can expect them to be priced competitively. While not everyone one will subscribe to all of these six recurring revenue services, Apple is now giving users new choices of aggregated content that is economical and can meet the various needs of their overall customer base.

One other theme that is equally important about Monday’s event was Apple’s emphasis on the privacy that comes with each of Apple’s services. This is a major differentiator between Apple and some competitors. Yes, with Apple you pay for services and this iron clad privacy.

Apple also has another thing going for them, and Oprah Winfrey nailed it when she said: “Apple has a billion pockets y’all”. All Told Apple has over one billion active users of IOS and Mac OS devices to target for these new services. And Apple’s marketing programs are legendary. While these new services most likely will start with small audiences at first, they have the potential to grow and become a powerful engine to fuel Apple’s bottom line growth. The Mac, iPhone, and iPad will also be important products, and Apple will continue to innovate around these hardware devices. And I believe that Apple will have the next major industry disruptor once they bring out their AR glasses and build a base of AR apps to drive its usefulness and growth.

I think we will look back on Apple’s Showtime event as a very important milestone in Apple’s history. I believe it launched the era of services being a more powerful force in Apple’s revenue model and with these new services it brings to Apple’s customer’s many new choices to keep them in the Apple family.

A Philosophical Take on Post Show-Time Apple

Apple’s Show-Time event on Monday sure was different from the Apple events I have been attending for over ten years. A lot has been written about each service and predictions have been made on whether they will be successful, and, as you can expect, I have my thoughts on that. In this column, however, I want to address three challenges or opportunities I believe these new services bring to Apple.

The Responsibility of Choosing

Most users of Apple products are used to the company deciding for them: whether we are talking about the right time of giving up the headphone jack or when they should get 5G. On Monday, however, Apple brought up the concept of “curated content” several times. This means that Apple is choosing what content to bring to you. With Apple Arcade, Apple is working with developers to bring you exclusive games as part as a subscription service. With Apple TV+, Apple is working with producers and writers to bring you exclusive content. And with Apple News+, Apple is bringing you news and stories they think are either great content or are the kind of articles you want to read.

For gaming and video what Apple decides for me will not have a material impact other than on my decision to subscribe to the service based on the quality and relevance of the content. With news, however, the effect that Apple’s curation might have is pretty significant. I trust Apple to keep my viewing data secure and I trust they will not be malicious about the content they will present to me. This does not mean I necessarily trust I will see everything I want or need to see. Curation does not mean that those articles that Apple highlights for me are the only articles I can read. Chances are, however, that most consumers will start and stop at the articles presented to them, more out of convenience than anything else.

Apple has, therefore, a great responsibility in both hiring editors and training its algorithms. Editors should be able to follow guidelines on what makes for best content based on anything that can be easily measured and assessed leaving no room for subjectivity. Similarly, the data used to train their models should truly reflect the reader’s preferences,  habits, interests and other data such as location.

All the good intentions in the world would not get Apple easily off the hook if consumers ever felt they did not get the full story with Apple News+.

The Risk of not Fully Controlling the Experience

Something else that was different at Monday’s event was that Apple did not control all the storytelling on stage. While I am sure everybody spoke from a pre-approved script, it was not Apple’s voice we heard. A long list of celebrities told Apple’s content story. Letting someone else tell their story, somewhat lowered Apple’s level of control.

Apple has different levels of control on the new services as well. Apple does not create every app in its app store, but they make the rules of the game, so in a way, they still control the experience, and this will certainly be the case with Apple Arcade.

With Apple Card, Apple is in control of what the card looks like both digitally and physically and how the AI-driven spending report is shown on your phone. But Apple is not in control of the approval process or any other banking aspect. It will be very interesting to see how smoothly the sign-up experience will be for users who might not have the perfect credit score. Also, is the fact that I can iMessage for support mean I should expect a level of service I get from the genius bar or a banking chatbot? Even if these are trained bank employees, I am sad to say customer service is not necessarily a strength that comes to mind when I think of my interactions with my bank. Even Amazon, which offers a Rewards Card with very favorable terms, must deal with very unsatisfied customers complaining about Chase, the card issuer and not Amazon.

When it comes to the production of their video content, there have been reports about Apple trying to control the process too much rather than letting the talent they hired to do their job. It was clear on Monday that Apple is selective about the topics they want to address with their creations. Ultimately, however, I am subscribing to Apple TV+ because I want to watch Oprah’s content, I want to be able to hear her voice loud and clear not one that is compromised by Cupertino’s interference. So while Apple may decide the type of content they want to air based on what fits with their brand and goal, it should not try to control the overall experience, or it would fail to deliver on the very promise they made of empowering storytelling.

The High-Bar of the Straight and Narrow

Over the past few months, Tim Cook has been very vocal about privacy being a human right and security being a core focus of Apple. On Monday, Apple took on even more of a role as a bastion of humanity by wanting for its users way more than privacy. Quality news, reporting, and writing, content that inspires us to think and learn as well as better financial health was how Apple talked about the new services. Apple is no charity, so of course, there is revenue potential in all these areas. To unlock such potential, Apple could have presented these services calling out ease of use, breadth, fun but they were very deliberate in what they highlighted. What Apple wants to deliver with these services reflects how I believe Tim Cook has earned respect and admiration across the company: morals and ethics, the legacy he wants to leave as a leader.

If you buy into what Apple is selling with these services, you are likely to keep Apple at a very high standard, and you will scrutinize their behavior more than you would that of a company for which you have low expectations, Facebook comes to mind.

 

The Apple I saw on stage at the Steve Job’s Theater was not a different company. I saw Apple’s core values and beliefs, its focus on elegance, ease of use, quality vs. quantity, and the power of a large installed base of users. I also saw a company that is entering new territory, at times with confidence and at times knowing they do not know it all and they cannot do it alone. And this is a good thing!

Apple’s Hope to Build a Story Telling Platform

Before digging into the actual products, or product news themselves, I think it is important to take a step back and look at the forest within the trees from Apple’s Showtime Event yesterday. I think a theme was clear, and how that theme ties into Apple’s broader media services.

Golden Age of Stories
In a theme report released by the MPAA (Motion Picture Association of America), the opening letter from their CEO contained this interesting first paragraph.

We live in a golden age of stories. In communities of all sizes, in all sports of the world, stories bring us together, challenge our assumptions, and inspire us in so many ways. — MPAA CEO and Chairman, Charles H. Rivken.

This is the thing that drives a service like HBO, Showtime, and even Netflix. This is the idea behind motion pictures, good journalism, much literature, and even video games. Those who tell the best stories tend to have the most successful hits in media. This is why I personally think Netflix is extremely interesting and why I have positioned them as a company creating stories as a service. Out of all the content destinations I watch, which is most of them, I feel Netflix consistently puts out the best stories. It is part of their content strategy which focuses on a narrative with a whole season or show having to be followed from the start. Most cable TV network shows have a little narrative but are largely produced so you can miss an episode or several and not miss much. Nearly every series on Netflix, and Amazon to a degree are more like a 10 episode movie.

This is why I think Charles H. Rivken is correct when he says we are in the golden age of storytelling. The Internet and the billions of screens in people’s pockets make it possible for great stories to see the light of day. So how does Apple fit into this?

The Story Telling Platform
While Apple is embarking on its own journey of proprietary storytelling with AppleTV+, the broader perhaps more interesting theme is Apple trying to create a platform for storytelling. If you look at the focus of the games, they are bringing to Apple Arcade, and they are mostly indie game developers who create immersive and cinematic gameplay that also tell a story. Perhaps not all the games included in Apple Arcade will be this way, but Apple went out to their way to showcase developers who do and highlight the storytelling potential of many iOS games.

Second, we have magazines. While I’m not a huge magazine fan, I do recognize they often tell stories in a much different way than news publications for example. Magazines often have more depth in their articles, more production, and a more narrative style of writing. Perhaps that is because their articles can be longer than most news articles, but overall, it is a different type of storytelling but storytelling none-the-less.

Lastly, we have AppleTV+. This was perhaps the most obvious push toward stories of all the announcements. Mostly because many writers, producers, and actors/actresses, were there to promote the stories they wanted to tell. Apple happens to be the platform they choose, mostly because Apple gave them the most money, but I think part of Apple’s pitch was the overall engagement and type of customer that Apple acquires. While Apple can and will keep paying for this content, I do think part of them hopes that the impact or the results of these stories being told on Apple’s platform has great impact and perhaps brings more storytellers to their doorstep.

Why Now?
If we look at all of this news in context, I think the why now to announce this becomes a bit more clear. Many were disappointed the details of availability and pricing were not immediately made clear and question why to have this event now. The reason is more platform-centric in my opinion and not that different than what gets announced at WWDC from a strategic viewpoint. Apple wants storytellers to know what they are doing and to hopefully buy into their vision and sign on to have content ready by launch. Apple wants to go out with a bang, and they have to really, and in order to do that they need to start seeding their vision sooner than later.

That’s essentially what yesterday was about. Seeding the vision to content producers about the storytelling platform and storytelling services they want to emphasize. With the added benefit of Oprah’s highlight quote as to why she has committed “because they are in a billion pockets y’all.”

Apple customer base is unique, and its platform is unique. This uniqueness has worked for third-party developers, and Apple wants it to work for third party storytellers in gaming, journalism, and movies/tv.

There are still a lot of questions, and details yet to emerge. We can and will form an opinion on the services themselves as they come out, but I think it’s important to understand the platform play Apple is pushing because in the end that is more in their wheelhouse.

Apple Card Highlights Disruption Potential for Tech Industry

Well, that was a surprise…not in terms of what Apple announced at their big services event—which frankly had few, if any surprises—but about what I was most excited about afterwards.

In part because there were so few details about the pricing and extent of their highly expected streaming TV service, the thing I walked away from the Steve Jobs Theater being the most impressed with was—wait for it—Apple Card.

Yes, a physical credit card, but more importantly, the virtual credit card services that are the latest extension to Apple Pay. In multiple ways, Apple Card is a great expression of what Apple does best. They find things that create pain points in our lives and make them much simpler by essentially reinventing them and how we interact with them.

Of course, there was hope that they would do the same with TV content and services too. But, while it’s too early to say for sure, my initial take is that they weren’t quite as successful there. Sure, the new Apple TV+ app looks nice and integration with a number of partners looks good, but there are several TV content aggregators already out there. Plus, to get the kind of comprehensive package Apple promised, you have to get local news and sports from one source, movies from iTunes and/or other streaming services, and then Apple’s content as well. Oh, and if you’re a fan of content from Netflix or if you like to watch TV content on Android devices or Windows PCs, you’re out of luck. That’s a lot of work and compromises for what’s supposed to be a complete service that works across “all your screens.”

On top of that, though it went by very quickly, I could’ve sworn I briefly saw a $10.99 monthly price for the ShowTime service that was added during the demo—not the $9.99 that had been expected. So, lots of pricing questions still remain—not the least of which is how much Apple will charge for their own original content.

In the case of Apple Card, however, the value proposition was much more straightforward, and the end product much more appealing as a result. Not only did Apple promise to rid people of dreaded fees associated with credit cards—late fees, international charge fees, etc.—they also completely rethought the experience of using a credit card. The software-based app experiences enabled via Apple Card range from leveraging on-device machine learning to easily label charges, to providing practical insights on how to reduce interest charges, to logical organization of charges and trends in spending. They also integrated strong privacy and security by requiring biometric recognition for all charges (via face or fingerprint recognition), tying transactions to the secure ID inside the iPhone, and keeping everything anonymized to prevent tracking of purchases. All told, it was the best example of the old Think Different Apple philosophy that I’ve seen in some time.

Not only that, but Apple even managed to sneak in a tiny bit of hardware into Apple Card via the titanium physical credit cards that they’re making part of the offering for those locations that don’t take Apple Pay. While you can argue it was a bit over the top, it was classic Apple in the best way, with even the tiniest details done right. (Or, as I joked with others, more proof that Apple still does hardware best….) The cards have no number or expiration date—just your named etched into them. Frankly, if I was part of another credit card company, I’d be nervous, because Apple completely reset the bar on what people are going to start expecting from a credit card.

In that regard, the Apple Card launch was also a great example of another broader phenomenon that we’re starting to see play out, something I’ll call “reverse transformation.” As big companies in other non-tech industries are going through what many like to call “digital transformation” and morphing into tech-driven organizations, tech companies are starting to look the other direction. They’re observing these trends in traditional industries and recognizing that those industries are now even more ripe for disruption than ever. By applying their “digital” expertise towards problems that have plagued or limited those traditional industries, and applying a fresh perspective to those issues, tech companies are becoming very real threats in industries that, just a few years back, seemed very far removed from tech. In effect, they’re taking advantage of an opportunity that didn’t exist until these other industries started to digitize themselves.

To be clear, not all industries are necessarily subject to this reverse transformation, nor are all tech companies equipped to take advantage of these new potential opportunities. But as Apple is starting to demonstrate with Apple Card, the possibility of digital transformations creating more risks for traditional industries and companies than many originally thought are very real. These types of changes won’t happen overnight, nor will Apple Card immediately disrupt the entire credit card industry, but the possibility of dramatically new tech industry-inspired services are clearly an interesting new possibility for us all to consider.

Podcast: Nvidia GTC, Google Stadia, HP and Oculus VR, Apple

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing the announcements from Nvidia’s GPU Technology Conference event, discussing the potential impact of Google’s Stadia cloud-based game streaming service, talking about new VR headsets from HP and Oculus, and chatting about Apple’s new iPad, iMac and AirPod announcements.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Apple’s Newsy Week

This has been a fascinating week for Apple, and one that may signal a bit of a change in how Apple releases products and holds media events. What we saw transpire this week for Apple is noteworthy, and I think very smartly, should this be the new normal. In case you missed it, Apple made headlines almost every day this week. During the past week, Apple made meaningful updates to the iMac line, iPad line, and AirPods. In years past, Apple would have held a media event just to make these announcements. Instead, they were updated via press releases and some clever Twitter posts from Apple CEO Tim Cook.

When Apple announced its March 25th media event a few weeks ago, I made the point that this would be the first event where the focus would not be on hardware. I acknowledged there would likely be some hardware announcements but they would not where the event’s emphasis would be. Little did I know how right I was! With Apple making their hardware announcements, before the media event, the entirety of Monday’s show looks to be on Apple’s newest services initiatives. The only holdout that could show up now is AirPower and perhaps an Apple TV, but I wouldn’t be surprised if it is still released at a later date.

If this does represent a new shift in strategy for Apple product announcements, then I think it is worth looking at the upside of this new strategy.

Apple Controls a Longer News Cycle
It has long been true of Apple that they drive headlines. Whenever Apple makes news, it generally controls the news cycle. No company dares try to make news anywhere near an Apple event or anticipated event or news cycle. This is also why Apple does some clever news bits around big conference shows, sometime CES, sometimes MWC, even around other competitors product launch events. Because they know their news will get some attention and take some attention away from others. I’ve been in this industry for more than 20 years now, 18 as an analyst, and as an outside observer, I’ve never seen any other company be able to control a news cycle like Apple.

With that context, now imagine if the norm for Apple is a product release storm like we saw this week. Contrast that with just one media event to launch these products and Apple gets a day, maybe two, of controlling the news cycle. By rolling out almost a weeks worth of announcements spread out over the whole week, Apple is in a position to control the news cycle for weeks now instead of days. I find this fascinating from a marketing and PR strategy. We have never seen Apple do anything like this before, and now looking back at the week it is easy to see how Apple controlled the broader media narrative and was a constant part of the media conversation all week.

Which, if more and more product updates from Apple are iterative in nature, they can get a lot of life from a weeks burst of announcements than they could from an event where nothing eye-popping is announced. Which leads to my next observation about a more iterative Apple going forward.

Apple and Iteration
It’s first worth mentioning that iteration can be innovative. I think too many people, including many execs and VCs I spend time with, confuse innovation with invention. Something doesn’t have to be a brand new creation to be innovative. Products get new features, functions and in general, become better and more usable. It’s the entire experience that sums up innovation not one whiz-bang feature.

In thinking about Apple’s hardware iterations, I’m reminded of a tweet from my friend Benedict Evans who is a partner at Andreessen Horowitz. He made this point last week well before Apple’s hardware announcements, and while the point is about software, I feel it also applies to hardware.

Just swap, the word hardware for software and the point remains, and I think is critically important. Consider this, ~85% of Apple’s customer base is not the hard-core elite techie who lives and breaths tech. They are school teachers, construction workers, stay at home mom’s or dad’s, grandmas and grandpas, students, farmers, chefs, police, firemen or women, doctors, pilots, you see my point. These are people whose lives do not revolve around the latest and greatest tech gadgets but who simply want technology that works and gets out of the way. For this, the mainstream consumer, iteration is what the Dr. ordered. Sure they like new functionality but only when it makes their lives less complicated not more complicated. Generally, iteration is exactly the continued journey toward eliminating complexity.

So while a more iterative Apple, which is exactly what we should expect for the next few years at least, is not the sexiest or interesting to the 10%. It is much more interesting and more useful to the 85-90% of Apple’s customer base who will appreciate and value consistent iterative improvements over the invention of the next big thing.

This is Apple in postmaturity, and this is what post mature hardware cycles look like. Services fit into this model nicely, and we will tackle how next week after we see what Apple has up its sleeve on Monday.

It’s Time for Tech to Tackle ‘Unintended Consequences’

Mark Zuckerberg did not set out for Facebook to be a platform for Russian hackers, or for Facebook Live to be used a broadcast tool for killers. Steve Jobs did not intend to cause screen addiction and some of the societal ills that have come from that. Travis Kalanick might have wanted to disrupt the taxi industry (which was sort of ripe for it), but he probably didn’t intend for Uber and his TNC brethren to be a cause of both an alarming rise in traffic congestion and a decline in investment in public transportation. Brian Chesky didn’t intend for AirBnB to become a vehicle for property investment companies, driving a surge in rental prices in some cities. And if you’d asked Jack Dorsey whether he thought a U.S. president would conduct nuclear diplomacy or fire his senior staff over Twitter, he might have said, ‘umm…don’t think so’. But this is where we are.

So while last year there was a lot of focus on data privacy and security issues, leading to GDPR and the beginnings of some change at Facebook and others, I think we do need to start a broader conversation about ‘unintended consequences’.  Here, I’d like to focus on a couple of particular examples of companies whose current avenue of growth seems to have strayed from the original mission, causing some broader societal repercussions. Put another way, I’d like to see a little more questioning along the lines of  “Yes, We Can Do This, But Is It A Good Idea?”

Uber/TNC. In their first phase, TNC companies were successful because they were a ‘better version of a taxi’, leveraging the power of software and a smartphone. There was little love lost for the taxi companies (though sympathy for individual drivers), and the ‘medallion’ structure in some cities that concentrated money and power in the hands of a select few. But the rapid (and largely unregulated) growth of TNCs has caused a marked increase in traffic congestion in many cities. TNCs have also induced a reduction in transportation ridership…creating a vicious circle of higher fares and lower investment in infrastructure. And along have come corollary services, such as Uber Eats, having a further impact on congestion (plus more double parking and other such behavior). At some point, have any of these companies asked the question: “Is it really a good idea to have all these cars out there just to deliver a cup of coffee or a sandwich?  What is the impact of this on a city’s transportation network?

AirBnB. The idea was initially welcomed as a complement to the prevailing lodging industry. Want to occasionally rent out your away-at-college kid’s bedroom, or your house for a week while on vacation? That phase of AirBnB was great…while it lasted. And, they used the power of the platform to create related, and valuable experiences such as AirBnB Experiences – pasta making in Florence, a history tour of New Orleans.

Unfortunately, however, the corporados have corrupted the AirBnB model. Investors and developers have bought millions of properties, with the sole purpose of turning them into temporary rental apartments. As has been well documented, neighborhoods have been altered, and rents have been driven up in many cities. Using one of the travel sites like Expedia today is a totally different experience, as one is presented with a mélange of accommodation options, including AirBnB-type apartments.  It’s very difficult to tell who’s who or what’s what. And now, predictably, regulators are stepping in, sometimes with draconian (and very uneven) measures. But along its path to Super Unicorn status (or whatever you call a private company valued at $30 billion), AirBnB grew unchecked, with almost no self-policing, not really asking, as vast swaths of New York and Miami and Paris were being transformed, “Is this a good idea? How are young people going to be able to afford to live in cities now?”

Amazon. A great company, in so many ways. The bets on AWS. On Amazon Prime. The creation of wonderful content. A ruthlessly efficient machine. And, their platform has enabled success for millions of small businesses. But, with a perhaps unintended consequence of having hollowed out physical retail. And there’s the larger question of the path that Amazon is on, as Recode’s Jason Del Rey articulated in a recent article, Amazon wants to sell “every genuine product in the world.” That’s a mistake.  To quote Amazon’s SVP Dharmesh Mehta, Amazon wants to sell  “every genuine product in the world.”, which, Del Rey points out, has “created all sorts of openings for what Amazon’s trust team calls ‘bad actors’” – counterfeit goods, fake reviews, etc. “It’s a fork in the road moment for Amazon, as its ambitions to grow even larger, there’s greater risk of ‘eroding customer trust’”.  Here’s a company that controls nearly 50% of on-line retail spend. Consider what a $300 billion company has to do in order to just maintain growth. It might be time to be asking what the longer-term implications of all this are.

Facebook Live. In an ideal world, Facebook Live might be a great idea – “a fun, engaging way to connect with your followers and grow your audience”, as Facebook says on its website. But, there’s a certain percentage of bad actors who are going to use this type of platform to broadcast inappropriate content or for other unsuitable purposes, as happened last week in New Zealand in the most extreme of examples. There is no way, regardless of how many people monitor the site or how much AI is thrown at it, that Facebook will be able to prevent other, similar exploitations of its platform. Even YouTube is different, because content is uploaded and it is easier to monitor – and therefore prevent/take down.  There might be some ways to de-risk Facebook Live (and other, similar concepts). But in its current incarnation, the risk of bad actors, even if an extremely small minority, makes the case for Facebook Live too perilous, in my view.

We are clearly at a point where there is some revisionist thinking going on about the impact of tech. And the risks multiply when AI is thrown into the mix. As part of this thinking, I suggest we expand our consideration to include long-term societal consequences: “Just because we can, is it a good idea”?

Gaming Looks to Shift to the Clouds

The gaming market is one of, if not the market segment that is seeing some of the biggest overall changes. From a segment viewpoint, gaming is one of the hottest categories around in both software and hardware. It is easily the biggest bright spot from a hardware standpoint with gaming hardware and accessory companies seeing continued strong growth and retailers continuing to be happy with the gaming segment performance compared to all other categories. Interestingly, we are still only in the early stages of major changes to the category as the market is continuing to grow and bring in new consumers, innovative new software, and business model changes.

Shift to the Clouds
The idea of streaming a video game has been an industry promise for many years. Some early pioneers, like OnLive, came and went but showcased the promise of cloud gaming, or Gaming as a Service (Gaas), but also highlighted the limits of the technology at the time. The single biggest challenge with streaming visually intense games from the cloud is latency. Which is not a significant issue for a casual, single-player focused game, but it is a problem with multiplayer games. Any latency whatsoever can be the difference between life and death in a competitive multiplayer game.

That was then, and the backend datacenter hardware, as well as household broadband speeds and wireless network speeds, have all increased. Cloud gaming is now more feasible than ever, and many companies are jumping on the opportunity. Just looking at the western markets the short list of cloud gaming offerings is as follows: NVIDIA, Sony, Microsoft, EA, Valve, and now Google. The list in China is even longer. Prices of these services range from $4.99 a month to $19.99 a month, but the value of having access to a large catalog of games to play without the need of dedicated gaming hardware has significant value.

The biggest challenge facing Gaming as a Service is publisher support with the newest and most critical game titles. As of now, many publishers are “windowing” availability of their biggest titles and either not bringing them to a streaming service or doing so more than a year later. Currently, PlayStation Now has the largest category of games at 500 total but also has the highest price of $19.99 a month.

A key thing enabled by cloud gaming is the elimination of dedicated gaming hardware and in particular an embrace of cross-platform. Many gaming services like PlayStation Now, NVIDIA’s GeForce Now, let consumers play games on PC and Mac and sometimes iOS and Android as well. This, to me, is perhaps one of the most significant signals of what is to come in the future.

Play Anywhere, Any Time, With Anyone
I’ve written about how the shift to the cloud will break down walls which have formerly existed in the gaming industry. To be clear, the walls are starting to crack, but there is still a lot of work to do to break them down completely. For example, Google’s latest cloud offering released this week, only works on their Chrome browser. I’m less optimistic about gaming as service solutions that have walls than ones like Sony’s and NVIDIA’s offerings which understand the cross-platform reality that exist for most consumers. I’m hoping Microsoft’s upcoming solution also embraces cross-platform and a world without walls because it’s better for consumers and the gaming industry.

I keep using Fortnite as the example because the behaviors we have seen in the market make it clear Fortnite’s model is the future. The biggest change that Fortnite brought was the ability to play with anyone regardless of what hardware or platform they were playing on. For the first time, a global audience could play on any hardware and play together. This is the future, and there is no going back. Cloud plays a role in this because Fortnite is truly a cloud platform, but it leverages the local hardware resources and provides better gameplay when on better hardware. The choice is up to the player. If they want to be hyper-competitive, then they can invest in better hardware. If they just want to play casually and have fun, then a browser or mobile device is sufficient.

There is significant value for consumers but also to publishers to every go back from this model. With the Game Developer Conference this week I had a chance to talk with some of my contacts at big game publishers of AAA titles, and they all agree this is where things are heading. They also acknowledge it will take time. The key here is publishers can now truly look at the entire connected world as their addressable market with a much easier engineering process. To date, publishers go through a lot of work to develop titles that work only on specific platforms. They have to pour engineering into making a game for XBOX, Playstation, Nintendo, PC, etc. Each has unique benefits and costs massive resources to try and cover the whole market. As gaming moves to the cloud, we get closer to a write-once work anywhere solution for publishers. Economically, there is more upside with the benefit of being better for consumers as well.

As I said, we are early days but I’m convinced a combination of Gaming as a Service solution, with true cross-platform multi-player will open up the floodgates for the gaming market and expand the category to new heights.

There is Still a Place in the Market for the iPad mini

It would seem counter-intuitive to think that there is still a place for the iPad mini when we have iPhones that have a screen as big as 6.5 inches. Yet, the iPad mini retains a loyal fan base thanks to its compact form factor. Since the arrival of the iPad mini back in 2012, some industry commentators tried to position it as the entry-level iPad, the first step into the product family that eventually grew to include the iPad Pro. Apple’s positioning and pricing made it clear, however, that the only thing that was “less” about the mini was its size. This latest update stays true to the successful formula adding Retina display and the A12 Bionic chip with Neural Engine to the iPad mini basically giving it the same brain as the iPad Pro!

When the first iPad mini came out I tried it but then decided the larger size better fitted my workflow. The iPad for me has always been more than content consumption. This is especially true today with the 12.9” iPad Pro that I usually alternate with the Surface Pro as my primary computer when I travel, which is often. I had the opportunity to test the new iPad mini and it became obvious to me, very quickly, that the iPad mini is the perfect companion to my iPad Pro. Reading and taking notes in particular, really brought out the big advantage of the iPad mini form factor.

I can touch type on glass, which is my preferred way to take notes while in meetings because I do not like to have a barrier between myself and the other participants. The new Smart Keyboard design no longer allows for the wedge fold making typing on the glass more challenging. This meant that I started using Pencil more and given my appreciation for the Samsung Galaxy Note I went as far as writing about Why Apple should add Pencil Support for iPhone. The new iPad mini is probably as close as I am going to get to that dream for a while. The new Moleskine note-taking app called Flow that I had the opportunity to try, reminded me of writing on one of their notepads which were a crucial part of my travel equipment for so many years.

It is a shame that Apple did not add support for Pencil 2 for iPad mini. I do understand that the iPad mini cannot charge it and that Pencil pairs through the lightning port, but I am guessing that there would be a software workaround to the pairing and that you could have charged through your iPad Pro. This would have meant carrying only one Pencil rather than two.

Reading books is also great, going from reading on an iPad Pro 12.9” to an iPad Mini is like going from reading a large textbook to reading a paperback. You can do it for longer and more comfortably while maintaining the crisp screen quality.

I also had the opportunity to see the new Angry Birds AR in action, and I can see how the game will become as addictive as the original, if not more. The iPad mini gives you a larger field of view compared to an iPhone but retains high mobile making it very well suited for AR.

Of course, as a screen for video content, the iPad mini can also do the trick thanks to the updated Retina display. This makes it a timely upgrade for those consumers interested in the upcoming Apple video service. It will be interesting to see if Apple will run any special promotions bundling the new iPads both the iPad mini and the iPad Air with the video service. What is interesting to me, however, is that while Apple might be keen on providing more “screens” for consumers to enjoy the upcoming video streaming services, they are not sacrificing hardware value by lowering its price.

The iPad Portfolio

I read many comments that referred to the iPad line up as confusing, calling out the products for having a lot of overlap. The reality is that there are several ways to look at the new portfolio. You can look at all the iPads as one family. You can see the iPad Pro models as a continuation of the Mac line. You can also see the iPad (2018) as an education first device and the rest of the iPad portfolio as one. Finally, you could see the iPad mini as much of a standalone as the iPad Pro models. The bottom line is, whichever way you look at it, the products make perfect sense to address a potential market of users that go beyond the iPhone installed base. There are, in fact, many Android phone users have iPads making the addressable market for iPad particularly diverse.

The other important point to make, especially as people comment on the iPad mini pricing, is that there have always been cheaper tablets. Maybe there are less today than in the 2010-2012 period just because vendors could not sustain to be as aggressive while keeping up with the iPad’s feature sets. But, the price of an iPad, even more so than that of an iPhone, has a lot to do with the ecosystem of apps that are available. Apps that can turn the iPad hardware, that some in the Android camp could attempt to replicate, into a versatile computing platform that marries entertainment and productivity very well.

The Future

Apple kept the bezel-less design and Face ID exclusive to the iPad Pro family, but as those devices gain new features, I expect the bezel-less design to trickle down into the rest of the portfolio. A bezel-less iPad mini could potentially replace the 9.7”, with the rest of the portfolio settling on 10.5” and 12.9”. This would continue to please users who love the smaller footprint while opening up the mini to a broader market.

The timing of these upgrades is no coincidence. Clearly, Apple is building a portfolio with the broadest possible appeal in time for its video service launch. But to think these devices were created just for that would certainly be a mistake. While the tablet market is dead, the iPad market is alive and well, and these new devices widen the opportunity, with the rumored video and news services set to grow engagement even more.

Exploring Apple’s Role in News Literacy

I grew up in the age of Walter Cronkite and other highly trusted news broadcasters. My family tuned in to Cronkite, affectionately known as “Uncle Walter” to many of us, every night to hear CBS’ nightly news at 6:00 PM.

These news broadcasts were critical to keeping us informed, as a nation, of world and national news. In my case, they conditioned me from an early age to trust these reporters to dispense accurate and impartial news. The idea that they would even consider reporting on something that had not been vetted and checked in detail never entered my mind. In fact, until the last 15-20 years or so, these national broadcasts were gospel to many of us “boomers,” as CBS, ABC, and NBC news were pragmatic and mostly on the money.

But since social media entered the scene and became a free platform where anything goes, those sites have become mediums for dispensing fake or false news stories as well as legitimate ones.

In a recent column I co-authored with Mark Sullivan at Fast Company, we asked whether people can tell real news from false news today and focused on a key program, the News Literacy Project, that teaches kids to tell the difference between true and fake news.

Here is an excerpt of what we wrote about the News Literacy Project:

TEACHING KIDS TO THINK LIKE GOOD REPORTERS

The News Literacy Project was founded in 2008 by Pulitzer Prize-winning Los Angeles Times reporter Alan Miller as a middle and high school classroom project in Washington, D.C.; New York City; and Chicago. Its lessons and materials are apolitical, created with input from real journalists. It teaches students how to recognize the earmarks of quality journalism and credible information, and how to know if articles are accurate and appropriately sourced. It teaches kids to categorize information, make and critique news judgments, detect and dissect viral rumors, interpret and apply the First Amendment, and recognize confirmation bias.

In one exercise, students are placed into the shoes of a young news reporter covering a breaking story. They’re asked to interview experts and eyewitnesses and review other material, then build a story piece-by-piece through multiple choice questions. They’re asked, in a sense, to think like a good journalist.

This era of social media and fake news has caused great confusion for many and has affected how our younger generation views news. I consider the ability to discern real news from fake news, which the News Literacy Project and other programs teach kids, a critical, foundational skill they will need to navigate their future.

The News Literacy Project got quite a boost this week when Apple announced it was backing the program. This is a big deal, and one that will make it possible for this educational program to reach many more young people.

Highly respected retired journalist Walt Mossberg, who sits on the organization’s board, tweeted a response to Apple’s investment in the News Literacy Project.

I recently spent some time with a group of young people and asked them where they got their news. Not surprisingly, most did not get it the more traditional way, via TV or radio. Many got their news, if they were even looking for it, on Facebook, Twitter, or other social media sites.

When I asked how they determined whether the news they were reading was real or fake, their answers were fuzzy and all over the place. Many of them read the news without even questioning it. Some said that even if they did question what they read, they did not do much to find out how accurate it was.

Educating our youth and helping them read news with a critical eye is a vital mission. Apple’s investment is a significant step in helping the News Literacy Project, and the two other programs Apple will also invest in, achieve this goal.

NVIDIA Deepens Lead in AI and Autonomy

There are some semiconductor companies that I feel are more susceptible to disruption. NVIDIA isn’t one of them. There are numerous reasons why I say this, but the main ones are focus and R&D. NVIDIA is the counterexample I cite when talking with the industry about the first-party semiconductor initiatives of companies like Google and Amazon. While I fully believe it makes sense for some companies to build some of their own semiconductor and data center hardware, there are many cases where this isn’t a good idea.

NVIDIA, and specifically its efforts and investment focus, has generally been my counterpoint to the trend to verticalize. The reality is that Google and Amazon, as examples, will never beat NVIDIA in specific semiconductor applications and supporting software. NVIDIA invests massive amounts of R&D in hardware and software innovation focused on AI. Companies looking to build some of their own data center hardware for AI and deep learning are investing only a fraction of the R&D NVIDIA is, which raises the question of whether they can do as good a job in these core applications. More specifically, is it worth the many millions of dollars in investment, which may or may not pay off, when it may be easier and wiser to buy the superior tech from NVIDIA and call it a day?

This is going to be an interesting story to watch in the data center specifically. Both Intel and NVIDIA are market share leaders in the data center, but the biggest cloud platform providers, like Amazon, Microsoft, and Google, want to create some of their own silicon to differentiate their platforms and create some lock-in for customers. But, I maintain, NVIDIA may still out-innovate them in core applications, and if those firms that want to verticalize to a degree can’t offer solutions that compete with a rival who is all in with NVIDIA, they risk losing sales. It is a fascinating dynamic to watch.

AI and Autonomy
Besides the companies I mentioned above, the other example I think is interesting is Tesla. Perhaps using Tesla as an example will bring the points I made above into clearer focus. Tesla was using NVIDIA technology in its vehicles for a variety of things, including many of the autonomous features. Tesla has since decided it wants to start making its own silicon for autonomous solutions. While I remain extremely skeptical that Tesla’s solution is better than NVIDIA’s from an end-to-end standpoint, the risk for Tesla is that it ends up unable to compete with other car companies that do go all in with NVIDIA.

Case in point: at its annual GTC conference, NVIDIA announced a partnership with Toyota Research to accelerate the deployment of self-driving cars. As NVIDIA gets closer to automotive companies, it becomes more likely those companies will use core NVIDIA IP. Moreover, I still have not seen a more complete self-driving solution than NVIDIA Drive AGX. NVIDIA’s continued investment in total end-to-end self-driving systems keeps it ahead of others and makes me think NVIDIA is well positioned to capture share of the ~$1,000 of estimated semiconductor content in each autonomous vehicle. Another way to look at the economic upside of autonomy is the forecasts that estimate the semiconductor opportunity for autonomous vehicles at ~$60 billion in 2025.
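As a rough back-of-envelope, and taking only the two figures above at face value (the ~$1,000-per-vehicle content estimate and the ~$60 billion 2025 forecast are the only inputs; mix between in-vehicle and supporting silicon is not broken out), the forecast implies autonomous semiconductor content for something on the order of 60 million vehicles a year:

\[
\frac{\$60\ \text{billion (2025 forecast)}}{\sim\$1{,}000\ \text{per vehicle}} \approx 60\ \text{million vehicles' worth of autonomous silicon per year}
\]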

From talks I have had with both automotive industry insiders and component providers, it seems all the major automotive companies are deciding where to place their bets and what to own versus what to buy. Their concern is that as the car becomes a computer, they risk losing the control of the customer experience they feel they need. This is the one area where I feel they approach the future like old companies rather than new ones, and a primary reason Tesla could be so disruptive.

When it comes to autonomous solutions, my hope is the car companies make wise decisions and partner with the tech companies that are making massive investments to turn cars into computing platforms. I doubt these companies can do it all themselves, and the safety concerns alone, which NVIDIA is also addressing, are an area where a wrong decision or the wrong bet on technology can mean life or death for a car company.

There is more to say on this, and I have a number of private research reports on the future of autonomous cars that I’ll summarize in a sector report for subscribers in the coming months.