Throw it out and see what sticks

From the June 22 Wall Street Journal:

In the first 24 hours of the launch of an electric-scooter pilot program in the city of Hoboken, N.J., the local police department received more than 1,500 complaints and comments about the scooters, its police chief said.

Since the May 20 launch, a steady stream of complaints has rolled into the Hoboken Police Department. During that time, the department has also taken nine reports of scooter collisions with parked cars and pedestrians, the worst of which occurred when an 11-year-old rider struck a pedestrian, who needed stitches.

“The number of issues about e-scooters has matched all other traffic complaints for the year, and this is only in a month,” Hoboken Police Chief Ken Ferrante said in an interview Thursday.

This is the physical manifestation of all that’s wrong with many tech companies, notably Uber, Facebook, and Google: throw out a new innovation without preparing for or caring about the consequences.

While the new idea might be exciting, novel, and even beneficial, these companies seem unable to understand, or are too lazy to worry about, the unintended consequences. Instead of thinking like a chess player, looking several moves ahead, they’re like kids playing marbles.

The typical retort is that they never imagined their technology being used in ways they didn’t intend:

“We never imagined that people could be injured riding scooters among pedestrians without a helmet, that scooters would be left anywhere blocking doorways or sidewalks.”

“We never imagined how targeting advertising could be used to target fake news to voters by adversarial countries during an election.”

“We never expected our data to be shared or compromised.”

“We never imagined that pedophiles would be going after kids watching cartoons on YouTube.”

“We never expected an Uber driver to attack a passenger.”

But that argument, used by all of these companies, is wearing very thin after so many miscues. For Facebook in particular, Zuckerberg’s, Sandberg’s, and the company’s reputations have plummeted. Yet the company is so big and profitable, they just don’t care.

When companies evaluate a new product or feature, they do it on the basis of its profitability. Will it bring in more revenue than it costs to implement? But if they never consider the cost to clean up, filter out the trolls, do better screening, and maintain a healthy environment, their profitability calculations will be all wrong. In many cases the cleanup costs, whether for scooters, YouTube videos, or Facebook feeds, are substantial. So substantial that companies resist doing what’s needed, because it makes their initial evaluation look way out of whack.
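To make the arithmetic concrete, here is a minimal sketch, using entirely hypothetical numbers, of how an evaluation that ignores cleanup costs can make a marginal product look like a winner:

```python
# Hypothetical product evaluation; every figure here is invented for
# illustration, not drawn from any company's actual numbers.
revenue = 10_000_000        # projected annual revenue from the new product
build_cost = 4_000_000      # engineering, launch, and marketing cost
cleanup_cost = 5_000_000    # moderation, screening, damage control (often omitted)

naive_roi = (revenue - build_cost) / build_cost
true_roi = (revenue - build_cost - cleanup_cost) / (build_cost + cleanup_cost)

print(f"ROI ignoring cleanup:  {naive_roi:.0%}")   # 150% -- an easy green light
print(f"ROI including cleanup: {true_roi:.0%}")    # ~11% -- a much harder sell
```

Once the launch has been justified by the first number, spending the cleanup money means admitting the second number was the real one, which is exactly the resistance described above.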

And when they do react, they outsource the cleanup, as in the case of Facebook, to make it less visible to the employees who created the mess to begin with. They eliminate the feedback loop that would prevent it from happening all over again. They’re motivated to keep repeating these mistakes to keep their investors happy and their stock price up.

This is not how products used to be evaluated when consequences were taken much more seriously, nor how they are still evaluated in other sectors of our economy. Hardware products have a cost to manufacture, to test, and to market, but they also have costs for warranty service, ongoing engineering, replacing defects, and customer support. The ROI is more predictable. But with these new companies, caution and due diligence seem to be an afterthought. Their mantra is to throw out the product, see how well it works, and worry about fixing it later. But too often they never get around to the latter because they’re on to their next new thing.

The proliferation of scooters in cities around the world is just another demonstration, albeit more visible, of these tech companies’ uncaring and selfish approach.

What’s disturbing is that these companies’ financial successes, and their ability to get away with so much, are likely inspiring others to follow the same behavior. Witness Boeing’s approach to the 737 Max. They believed that they, too, could “throw out” their new plane with a design that had serious flaws, without proper training, testing, and an understanding of the consequences. And we know how well that worked.

Podcast: HPE Discover, Facebook Libra and Content Monitoring, Google Tablet

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the announcements from the HPE Discover show, discussing the potential impact of Facebook’s Libra cryptocurrency and a recent article on content monitoring concerns at a Facebook contractor, and debating Google’s decision not to release additional Google Pixel-branded tablets.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Oculus Quest: Amazing Hardware, But Still Missing Mainstream Killer App

I’ve been testing the new Oculus Quest virtual reality (VR) headset from Facebook, and I’m quite impressed by this piece of hardware. Facebook took what it learned from shipping its earlier products—the high-powered, PC-tethered Oculus Rift and the less-expensive, standalone Oculus Go—and put together a $399 standalone headset that offers a very good VR experience. It’s not perfect, but from a hardware perspective, it ticks off many of the boxes needed to help move VR toward a mainstream audience. Unfortunately, the Quest is still missing a vital piece of the puzzle needed to bring in the average consumer: mainstream content or a must-have app.

Masterful Hardware Combo

I can’t overstate this: The Quest is a truly impressive piece of hardware. Facebook has pulled together some of the best technologies available, within the limitations of its price point, and produced a very solid product that never feels like a compromise.

At the heart of the headset is Qualcomm’s Snapdragon 835 processor, which brings impressive mobile-computing power within a reasonable power envelope. Facebook leverages the chip to drive an OLED display that offers 1440 by 1600 resolution per eye, with a 72Hz refresh rate. The Quest utilizes inside-out tracking, which means you don’t have to set up external sensors to track the headset and the handheld touch controllers.

Setup is straightforward: Using the Oculus smartphone app, you connect the headset to your home WiFi, download any updates, connect the controllers, and begin. The process is fast, easy, and clever, teaching you what you need to know about the system while you map your space and have a little fun. I was consistently impressed by the Quest’s tracking capabilities (Facebook calls it Insight Tracking), which held up across a wide range of uses. I also like the integrated audio, which channels sound toward your ears without forcing you to wear headphones (although people nearby may not appreciate hearing what you hear).

Compared to the previous-generation Oculus products, the Quest feels more polished and complete. Yes, the Rift (and now Rift S) offers higher processing and graphics performance thanks to the connected PC, but there’s just no getting around the limitations that the physical tether represents. And, yes, the Go is lighter and a bit more comfortable to wear, but the limitations of that $199 product’s hardware and tracking quickly become evident. The Quest represents an impressive merging of next-generation technology and smart design that leads to the desired outcome: You stop thinking about the hardware and just embrace the VR experience.

Good, Not Great, App Selection

The Oculus Quest represents the best VR technology 2019 has to offer. However, once setup is complete and it’s time to get down to the business of using VR, things aren’t as rosy. When the Quest shipped in late May, Facebook said there were about 50 apps, which is notably fewer than what’s available on the long-shipping Rift. It’s hard to get a precise count inside the Quest, but there doesn’t appear to be a huge number more today. And, unfortunately, hardware limitations mean that many of the titles currently available on the Rift will never make their way to the Quest. Facebook tries to address this scarcity by adding apps such as Oculus TV and Oculus Gallery (which I was pleased to see found my networked Plex app, bringing my stored videos and photos to the Quest). Of course, volume isn’t everything, but after you spend a few days inside the Quest, you can’t help but feel like there’s just not that much content to explore.

Plus, most of that content has a price tag attached to it. App developers deserve to be paid, of course, but on a new platform such as this, where users are casting about for content they want to try, the biggest points of friction are discovery and the upfront cost of the software. Facebook offers what appears to be a fairly generous return policy on some content (although the terms and conditions document is daunting), but more of this content should be free to try. HTC has cleverly addressed the challenge of making content more discoverable, while making sure to compensate developers for their work, with its Viveport Infinity service, which lets consumers pay a monthly or annual fee to access VR apps, games, and videos. That service is available on the Oculus Rift, but not the Quest. Facebook needs a comparable service.

The content, or lack thereof, is really where the Quest falls down. While I’m confident that there’s more on the way, I’m less confident that there’s an inbound app that will shift the Quest from a product that early adopters and some gamers will embrace to one that mainstream buyers need to have. That’s because, despite the hardware advances and all the talk from Facebook and others about next-generation experiences, on the consumer side of things VR remains in a gaming and video playback rut. To date, we’ve not seen the types of apps that move these products from something a few people are willing to buy and use to something many people are excited to try. Most expected this to be some form of social app, but the current environment around that category may delay that vision.

That said, it’s very hard to get developers to create exciting new types of apps when the hardware installed base for VR remains small. And, of course, the installed base for a standalone product such as the Quest is even smaller. One hopes that this product, and those that follow, will help to address this challenge. We are coming ever closer to the point where the VR hardware is “good enough.” We just can’t say the same about the breadth of VR experiences. Yet.

Looking beyond consumers, I do expect commercial VR buyers to embrace the Quest. As I’ve noted in the past, enterprise buyers are increasingly leveraging VR for a wide range of use cases, led by training. The Quest’s hardware capabilities, combined with Facebook’s ongoing efforts to address commercial users’ pain points, should make this product quite attractive to IT buyers. And success in the commercial space may buy Facebook, and the broader VR category, the time it needs to figure out just what type of app is needed to eventually drive a compelling mainstream VR story.

Project Libra: Bringing Our Money Closer Together

On Tuesday, Facebook revealed its plans for the much-rumored cryptocurrency that came together under Project Libra. In a white paper, Libra explains its mission as a “simple global currency and financial infrastructure that empowers billions of people.”

All the Right Steps, on the Surface

Reading through the white paper, I could not help but notice how Facebook masterfully ticked all the boxes that, at least on the surface, would ease concerns.

Governance – Libra will be governed by the Libra Association, which is said to have 100 members by the time of its launch in mid-2020. Facebook is one of the founding members, and it will maintain a leadership role through 2019. The Libra Association is a non-profit membership organization headquartered in Geneva – nothing says banking and independence better than Switzerland!

Independence – Facebook created Calibra, a regulated subsidiary that will ensure separation between social media data and financial data.

From permissioned to permissionless – Libra will start by granting select members permission to run blockchain validator nodes, but the goal is to eventually allow anyone who meets the technical requirements to be a validator. The set timeframe is five years from the launch of the Libra blockchain and ecosystem. In its early years, the founding members will be the validator nodes (a conceptual sketch of this shift follows).
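As a conceptual illustration only, the planned governance shift can be thought of as a change in the admission rule for validators. The member names and checks below are invented stand-ins; the white paper describes policy, not an API:

```python
# Conceptual sketch of Libra's planned permissioned-to-permissionless shift.
# Membership list and checks are hypothetical illustrations.
FOUNDING_MEMBERS = {"Calibra", "Visa", "Mastercard", "Uber"}  # illustrative subset

def may_run_validator(candidate: str,
                      meets_technical_requirements: bool,
                      permissionless: bool) -> bool:
    """Decide whether a candidate may operate a validator node."""
    if permissionless:
        # Target state, planned within five years of launch: anyone who
        # clears the technical bar can validate.
        return meets_technical_requirements
    # Launch state: only association members, admitted by permission.
    return candidate in FOUNDING_MEMBERS

print(may_run_validator("IndieNode", True, permissionless=False))  # False at launch
print(may_run_validator("IndieNode", True, permissionless=True))   # True after the shift
```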

Despite all of this, it is clear that Facebook is not in the charity business, and the driver behind Libra and Calibra is monetizing commerce by empowering more transactions to go through Facebook. While they say Calibra and Facebook will keep data separate, one has to wonder in how many different ways Facebook will be able to influence people’s purchases through advertising. Also, keeping data separate does not mean that there will be no learnings shared between the two companies. If you think I am too harsh, I have two words to say to you: Cambridge Analytica! By now, it is also clear that Facebook does not seem to learn from its mistakes, so my level of trust that there will be no cross-pollination between the two is very low. I also have very little confidence that Calibra will be in a position to make decisions that put users first and Facebook second. In this case, the word I have for you is Instagram!

Unsurprising Government Opposition

I am not the only skeptic, though! As soon as the Libra white paper was made public, the initiative was met with strong opposition both in the US and in Europe. This really should not come as a surprise to anyone, given the current stance that governments on both continents have taken on big tech and Facebook in particular. The concerns are multiple, according to Bloomberg. The French Finance Minister is adamant that Libra cannot become a sovereign currency. The European Central Bank calls for holding Libra to the highest standard of regulation. In the United States, House Financial Services Chairwoman Maxine Waters asked Facebook to put Libra on pause while Congress and regulators get answers to their questions.

These concerns might touch on different aspects of the cryptocurrency project, but ultimately, they are all about one thing: power. Government officials and regulators are terrified at the idea that Facebook could become even more powerful than it is today.

Of course, power, coupled with a lack of regulation in an area that is the backbone of every capitalist economy, is a huge threat to the current status quo and a significant risk for consumers.

Vulnerability is My Main Concern

My real concern about the Libra Project is the vulnerability of the audience it is aimed at. If we believe the problem statement in the white paper, Libra wants to be a solution for the underserved:

“All over the world, people with less money pay more for financial services. Hard-earned income is eroded by fees, from remittances and wire costs to overdraft and ATM charges.”

I am sure that India will be a key market for Libra, given the high popularity of Facebook there. India is not new to being targeted with payment solutions. Back in 2010/2011, when Nokia was still the leader in the mobile phone market, a lot of attention and resources were put into using mobile payments and microfinancing to reach those mobile phone users who did not have access to formal lending institutions. Back then, it seemed like a no-brainer that phones were the way forward: 850 million phone subscribers vs. 240 million bank account holders! The model, however, was not risk-free: establishing the authenticity of both agents and customers was hard, technology issues were common, regulations around privacy were lacking, and when things went wrong, the process for holding someone accountable was arduous.

Relatable? It should be, as I believe some of the risks and concerns we had with mobile payments and microfinancing are the same ones I have with Libra, but on steroids. I say on steroids because the power and the “profit first” attitude shown by Facebook time and again amplify these risks. When you have little to no alternative, you are usually more accepting of the solution that is presented to you, even when there is a risk. I am not often one to call for regulation, but given where we are with social media because nobody paid enough attention along the way, it is clear to me that we cannot afford to do the same with Libra. Thinking we have time because we look at Bitcoin and see it has not scaled would be a big mistake and would totally underestimate the Facebook machine. Today, if I do not trust Facebook, I can delete the app, deactivate my account, or simply never bother to use it. Once I come to rely on Libra as the backbone of my finances, switching off would be crippling.

HPE and Google Cloud Expand Hybrid Options

The range of choices that enterprises have when it comes to both locations and methods for running applications and other critical workloads continues to expand at a dizzying rate. From public cloud service providers like Amazon’s AWS and Microsoft’s Azure to on-premise private cloud data centers, and from traditional legacy applications to containerized, orchestrated microservices, the range of computing options available to today’s businesses is vast.

As interesting as some of the new solutions may be, however, the selection of one versus another has often been a binary choice that necessitated complicated and expensive migrations from one location or application type to another. In fact, there are many organizations that have investigated making these kinds of transitions, but then stopped, either before they began or shortly after having started, once they realized how challenging and/or costly these efforts were going to be.

Thankfully, a variety of tech vendors have recognized that businesses are looking for more flexibility when it comes to options for modernizing their IT environments. The latest effort comes via an extension of the partnership between HPE and Google Cloud, which was first detailed at Google’s Cloud Next event in April. Combining a variety of different HPE products and services with Google Cloud’s expertise in containerized applications and the multi-cloud transportability enabled by Google’s Anthos, the two companies just announced what they call a hybrid cloud for containers.

Basically, the new service allows companies to create modern, containerized workloads either in the cloud or on premise using cloud software technologies, then run those apps locally on HPE servers and storage solutions while managing them and running analytics on them in the cloud via Google Cloud. In addition, thanks to Anthos’ ability to work across multiple cloud providers, those workloads could be run on AWS or Azure (in addition to Google Cloud), or even get moved back into a business’ own on-premise data center or into a co-location facility they rent as needs and requirements change. In the third quarter of this year, HPE will also be adding support for its Cloud Volumes service, which provides a consistent storage platform that can be connected to any of the public cloud services and avoids the challenges and costs of migrating that data across different service providers.
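The portability pitch is easier to see in code. Below is a minimal sketch using the open-source Kubernetes Python client; the cluster context names and container image are hypothetical, and it models only the generic container portability the announcement builds on, not the HPE/Google service itself:

```python
# Sketch: one containerized workload applied to two clusters by switching
# kubeconfig contexts. Context names and image are hypothetical.
from kubernetes import client, config

def make_deployment() -> client.V1Deployment:
    container = client.V1Container(name="demo-app",
                                   image="gcr.io/example/demo-app:1.0")
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=template)
    return client.V1Deployment(api_version="apps/v1", kind="Deployment",
                               metadata=client.V1ObjectMeta(name="demo-app"),
                               spec=spec)

# Same manifest, two destinations: an on-prem cluster and a cloud cluster.
for ctx in ("onprem-hpe-cluster", "gke-cloud-cluster"):
    apps = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
    apps.create_namespaced_deployment(namespace="default", body=make_deployment())
```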

On top of all this, HPE is going to make this offering part of its GreenLake pay-as-you-go service consumption model. With GreenLake, companies only pay for whatever services they use—similar to how cloud computing providers offer infrastructure as a service (IaaS). However, HPE extends what companies like Amazon do by providing a significantly wider range of partners and products that can be put together to create a finished solution. So, rather than having to simply use whatever tools someone like Amazon might provide, HPE’s GreenLake offerings can leverage existing software licenses or other legacy applications that a business may already have or use. Ultimately, it comes down to a question of choice, with HPE focused on giving companies as much flexibility as possible.
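For a sense of how consumption pricing differs from a capital purchase, here is a minimal sketch with invented rates (not actual GreenLake pricing):

```python
# Sketch of consumption-based billing; rates and usage figures are invented.
RATE_PER_VM_HOUR = 0.12      # hypothetical
RATE_PER_GB_MONTH = 0.03     # hypothetical

def monthly_bill(vm_hours: float, storage_gb: float) -> float:
    """Metered charge: pay only for what was actually consumed."""
    return vm_hours * RATE_PER_VM_HOUR + storage_gb * RATE_PER_GB_MONTH

# A quiet month and a busy month produce very different bills; a purchased
# server would cost the same either way.
print(f"quiet month: ${monthly_bill(2_000, 5_000):,.2f}")    # $390.00
print(f"busy month:  ${monthly_bill(20_000, 8_000):,.2f}")   # $2,640.00
```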

The GreenLake offerings, which HPE rebranded about 18 months ago, are apparently the fastest-growing product the company has—the partner-channel portion of the business grew 275% over the last year, according to the company (though obviously from a tiny base). They’ve become so important, in fact, that HPE is expected to extend GreenLake into a significantly wider range of service offerings over the next several years. Indeed, in the slide describing the new HPE/Google Cloud offering, HPE used the phrase “everything as a service,” implying a very aggressive move into a more customer experience-focused set of products.

What’s particularly interesting about this latest offering from the two companies is that it’s indicative of a larger trend in IT to move away from capital-intensive hardware purchases towards a longer-term, and theoretically stickier, business model based on operating expenses. More importantly, the idea also reflects the growing expectations that IT suppliers need to become true solution providers and offer up complete experiences that businesses can easily integrate into their organizations. It’s an idea that’s been talked about for a long time now (and one that isn’t going to happen overnight), but this latest announcement from HPE and Google clearly highlights that trends seem to be moving more quickly in that direction.

From a technology perspective, the news also provides yet more evidence that for the vast majority of businesses, the future of the cloud is a hybrid one that can leverage both on-premise (or co-located) computing resources and elements of the public cloud. Companies need the flexibility to have capabilities in both worlds, to have additional choices in who manages those resources and how they’re paid for, and to have the ability to easily move back and forth between them as needs evolve. Hybrid cloud options are really the only ones that can meet this range of needs.

Overcoming the complexity of modern IT still remains a significant challenge for many organizations, but options that can increase flexibility and choice are clearly going to be important tools moving forward.

Podcast: E3, AMD, Tech Industry Regulation, Google Pixel 4 Preview

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing the announcements from the E3 gaming show, including big GPU and CPU announcements from AMD, discussing recent comments on potential antitrust movements against major tech companies from the US government, and trying to discern Google’s strategy around leaking Pixel 4 news early.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Government’s Regulatory and Antitrust Hypocrisy

I’m not sure if I fall into the minority on this viewpoint, but the more I talk to folks around the tech industry about the regulatory concerns, the more I’m convinced government regulation, or a breakup of big tech, is not the answer. In my mind, there are two pieces of low-hanging fruit to discuss regarding a modern antitrust environment.

Should the Definition and Circumstances Change?
I think it is clear from recent news and communications from the DOJ and the FTC that they are attempting to modernize what is understood as antitrust or anticompetitive behavior, and that market share is no longer a defining element. For those interested, I highly recommend reading in full the speech of Assistant Attorney General Makan Delrahim at the Antitrust New Frontiers Conference.

When listening to questions fielded to CEOs or executives lately on whether they feel they are a monopoly, they have often used their market share as a defense. Small market share is simply no longer a defense in this new era; instead, the conversation will shift to two areas: competition and consumer harm.

This is essentially how Makan Delrahim states the purpose of antitrust, and the core question, in this line of his speech: “Therefore, the right question is whether a defined market is competitive. That is the province of the antitrust laws.” Essentially, all discussion going forward should center on whether a market is competitive and whether any incumbents are abusing their leverage to stifle competition or, in some cases, innovation.

On the topic of both re-orienting our understanding of antitrust in the digital era and competitive market dynamics, I found the following points from Makan’s speech quite interesting:

Finally, the Antitrust Division does not take a myopic view of competition. Many recent calls for antitrust reform, or more radical change, are premised on the incorrect notion that antitrust policy is only concerned with keeping prices low. It is well-settled, however, that competition has price and non-price dimensions.

Price effects alone do not provide a complete picture of market dynamics, especially in digital markets in which the profit-maximizing price is zero. As the journalist, Franklin Foer recently said, “Who can complain about the price that Google is charging you? Or who can complain about Amazon’s prices; they are simply lower than the competition’s.” Harm to innovation is also an important dimension of competition that can have far-reaching effects. Consider, for example, a product that never reaches the market or is withdrawn from the market due to an unlawful acquisition. The antitrust laws should protect the competition that would be lost in that scenario as well.

If you follow executive commentary, you’ll note that price has also been mentioned as a monopoly defense. Apple was quick to point out that 80% of apps on the App Store are free, and Amazon points out that it is generally the most competitive on prices and works tirelessly to keep prices low. It is abundantly clear price is also no longer a defense against monopoly. This speech makes it clear the issue at hand is not Apple’s pricing per se but that there are only two app stores. While competition is alive and well, arguably, on the App Store, competition among app stores themselves is not alive and well.

The digital era is one of conglomerates. There is no way around that truth, and in this era, it seems, antitrust initiatives will focus more on how said tech conglomerates use their leverage to stifle competition. This opens the door, in my opinion, to a more crucial and better competitive analysis emerging. That is particularly true for the companies themselves, many of which, I have felt for some time, have worked closely with their legal teams to get as close to the edge of antitrust behavior as possible without crossing it. Many companies may need to rethink some of their long-term strategies in light of the much larger magnifying glass being placed on them going forward.

Correlation, Causation, and Hypocrisy
A few other observations on this matter. In this speech, one of the more interesting viewpoints used was one looking historically at antitrust pursuits and remedies as a matter of causation. For example, Makan says about Microsoft “Although Microsoft was not broken up into smaller companies, the government’s successful monopolization case against Microsoft may very well have paved the way for companies like Google, Yahoo, and Apple to enter with their own desktop and mobile products.”

Note the language “may very well have.” Most of us in the industry who have studied it for a long time can say with a high degree of certainty that the antitrust suit against Microsoft was absolutely not the reason Google or Apple saw success with their mobile operating systems. The danger in the government’s view here is reading too much into past successful antitrust actions and treating them as a template for successes today. That discounts the vast number of other dynamics that led to other companies’ successes.

As we build out our thinking around how companies have been operating, and more specifically how they use their leverage, I do think collusion and exclusivity are worthwhile areas for antitrust regulators to look at more deeply. That being said, I did find it ironic that when it came to collusion, the following example was used:

The Antitrust Division may look askance at coordinated conduct that creates or enhances market power. Consider, for example, the Antitrust Division’s investigation of Yahoo! and Google’s advertising agreement in 2008. The companies entered into an agreement that would have enabled Yahoo! to replace a significant portion of its own internet search advertisements with advertisements sold by Google. The Antitrust Division’s investigation determined that the agreement, if implemented, would have harmed the markets for internet search advertising and internet search syndication where the companies accounted for over 90 percent of each market, respectively. The agreement was abandoned after the Antitrust Division informed the companies that it intended to file a lawsuit to block the implementation of the agreement.

Here again, something that seemed well intentioned may have hurt Yahoo more than it helped: Yahoo has since faded into irrelevance, leaving us in the West with really only one search engine. Working with Google might have prolonged Yahoo’s life long enough for it to come up with something new or innovate. We will never know. The point remains that regulation runs the risk of achieving the exact opposite of its intentions, and sadly, most of these regulators are not informed enough to play out all scenarios as part of their decision-making process.

Furthermore, the level of hypocrisy antitrust regulators have shown up to this point seems odd. Think about the cable monopolies, which had zero innovation, high prices, and very little competition in specific regions. Banking is another area where, while it seems like customers have a choice, there has been very little innovation, terrible customer experience, high fees, and a range of government regulations that make the barrier to entry too high for many startups.

As I said at the beginning, sometimes regulation is helpful, but more often than not, especially in the digital era, I think it can be argued it has done more harm than good.

That being said, I’m glad the government will start taking a look at certain issues. However, I worry they are ill-equipped to do so in many areas, and my fear is that too much overstepping will end up hurting competition, consumers, and innovation in unintended ways.

Netflix Needs to Pivot. Again

Of all the success stories in tech over the past 20 years, the evolution of Netflix is among the most fascinating. Few companies in history have pivoted successfully multiple times. And Netflix is now faced with a situation where they will have to evolve, yet again.

First, a quick history. Netflix, in its initial incarnation dating back to 1997, was that ‘little red envelope’ company. During the time of peak video store/Blockbuster, Netflix created a successful subscription business of movie/TV series rentals by mail. In that pre-broadband/pre-streaming period, Netflix was that era’s Grubhub and DoorDash. And for those who lived a distance from the nearest video store, Netflix was a godsend.

Then, broadband arrived, and with it, services such as iTunes for movie purchases and rentals online. Over a period of about ten years, video stores essentially disappeared, with Blockbuster being among the biggest casualties. But Netflix successfully pivoted here, maintaining its legacy mail-order business while simultaneously building a successful streaming video business during the 2000s. Their ability to do this on a global basis was somewhat dependent on the availability of decent-quality broadband to homes.

Then, yet again, Netflix saw the writing on the wall and engineered a second successful pivot. Competing streaming services had emerged, and on-demand capabilities and libraries from cable companies and other providers continued to grow. This meant that much of the content available on Netflix was also available through other sources. So Netflix made a bet on original content, beginning with House of Cards. Hard to imagine that was only in 2013. During the past six years, Netflix has plowed billions into original content, producing – literally – thousands of scripted shows and movies, globally. Today, Netflix is effectively in two businesses: people still subscribe to Netflix for its vast library and terrific UI, but increasingly, Netflix is another content channel, just like HBO, Showtime, or Hulu. In an era with plenty of competition in streaming, Netflix has continued to grow quickly and enjoys incredible subscriber retention.

Which brings us to this moment, and the need for Pivot #3. Another exogenous market development has the potential to upend Netflix’s business. The slew of media M&A that has occurred over the past couple of years, and the imminent launch of new streaming services from Disney, Warner Media (AT&T), Apple, and others, will have a dual impact on Netflix: the first is a much larger number of streaming options competing for the consumer’s dollar; the second is a dramatically altered content landscape. With Disney et al. getting into the business, Netflix is losing, and is preparing to lose, large chunks of its content library, as those companies choose to keep their content on their own platforms. For example, Netflix will lose most of its Disney content (which includes many of Disney’s properties, from Marvel to Lucasfilm), and is also in danger of losing some TV staples that are among its most popular titles, such as Friends and The Office (here’s a good list). Even with all its Originals, two-thirds of the viewing hours on Netflix are licensed content. Another dynamic is that with Apple and other well-heeled players getting into the game, competition for top talent is becoming increasingly intense (this is a great time to be in the upper echelons of the content business!).

The burgeoning of streaming options is competing for viewers’ finite dollars, and the explosion of content choices is competing for viewers’ finite time. This is forcing Netflix to dust off its strategic plan yet again. This time, however, it’s less of a dramatic pivot and more of an evolution with two components. The first is that Netflix has been steadily raising prices, in order to pay for both more original content and escalating rights fees for licensed content (sound familiar, haters of cable companies?). In the same way cable bills went up largely because of mushrooming fees for sports rights, consumers will pay for this downstream. For Netflix, revenue growth is slower than content expenditure growth (sound familiar, haters of cellular companies?). But Netflix is clearly taking the long view.

The second component is that Netflix is both broadening and deepening its original content productions. Some detractors describe Netflix as having become the ‘Wal-Mart’ of content. But it’s more like a department store (or at least what department stores used to be), offering content for multiple ages, life stages, and preferences, from lowbrow to highbrow. Another key component to this is investing in more regional content. Although much of its original content library is available globally, Netflix is also creating a lot of content for audiences in particular geographies that not everyone might see. The fact that Netflix is a global company will be a key strategic advantage going forward.

Now usually, life’s not so good for companies that raise prices as they lose content. It is interesting that so far, Netflix is relatively unscathed. Its stock price is up 50% this year, and subscriber growth has been solid. Wall Street does not seem to be overly worried. A year ago, all sorts of articles were popping up about Apple, Amazon, Warner Media, and Disney being ‘Netflix Killers’. But the tone has changed. It now seems that each is looking more for its specific place in the universe: Disney as the inexpensive add-on that many people will buy, since, in the words of my eighteen-year-old, it “owns most stuff”; Apple, with a more Amazon Prime-like model for video; AT&T, adding some gravy to its signature property HBO; and Comcast Universal, which has announced it will launch a streaming service but whose motivations are more to have some leverage vis-à-vis competitors and some offering for the cord-nevers.

I think Netflix should weather the storm just fine, at least in the short-to-medium term, mainly because it has made the right moves to become the ‘must have’ channel in a typical consumer or household’s content lineup. And even as the content universe churns, Netflix will still have the largest stockpile of its own and others’ content. It also has a superior user interface, is on everybody’s platform (including its competitors’), and enjoys a sterling reputation among consumers. But it’s interesting to see how the Netflix of 2020 is very different from the Netflix of 2010, which was very different from the Netflix of 2000. If Netflix is already the stuff of B-School business cases, another chapter is waiting to be written.

Why Apple Could be Interested in Robo-Taxi Startup Drive.ai

Various reports surfaced last week that stated Apple was looking to acquire a robo-taxi startup called Drive.ai. Drive.ai is developing a self-driving shuttle service and has raised $77 million thus far.

While Apple has not confirmed this acquisition, purchasing a company like this, one with technology patents and a team of engineers focused solely on self-driving cars, would be a very interesting move by them. They recently laid off 200 people from their Project Titan autonomous driving group, and adding this type of engineering talent to bolster the project could be highly strategic as they continue to research what Apple could eventually bring to this very nascent market.

While we still have no real clue what the end goal of Project Titan actually is, an acquisition of a company working on a shuttle service could give us a hint.

In the world of autonomous driving, there are five levels of autonomy, commonly summarized as follows:

Level 1 – Driver assistance: the vehicle can handle a single task, such as adaptive cruise control, while the driver does everything else.

Level 2 – Partial automation: the vehicle can steer, accelerate, and brake in certain conditions, but the driver must supervise at all times.

Level 3 – Conditional automation: the vehicle can drive itself in limited situations, with the driver ready to take over when asked.

Level 4 – High automation: the vehicle can complete entire trips without intervention within a defined area or set of conditions.

Level 5 – Full automation: the vehicle can drive itself anywhere, in all conditions, with no human driver needed.

Over the last year, I have had high-level discussions with folks at Ford and GM, and both have told me that they believe it will be well into the mid-2020s before they would sell self-driving autos to customers. Even that date may be a stretch, given that people have had decades of complete control over their vehicles and will need to be willing to trust technology to take over their driving experience completely.

Ford and GM will start by adding level 1 and level 2 features to their autos over the next two to three years, which are good first steps toward fully automated driving. From a technology standpoint, Tesla, Waymo, and some automakers believe they can deliver fully automated vehicles by 2021, but convincing the public to adopt self-driving cars for themselves will take a lot longer.

In talks with Ford and GM, both have suggested that the first market they see developing for self-driving cars will be level 5 taxi or shuttle-type services. Although there is still much to be done to perfect level 5 vehicles, many who are working in this area think that this will be the way the public is introduced to autonomous driving, and an important step that needs to be taken before level 5 cars are sold to the public.

The basic idea behind this would be to deliver an Uber- or Lyft-like service where a self-driving car comes to your location on demand and takes you to your destination. Uber and Lyft are already preparing for this type of service in the future. The automakers see this as a good opportunity to create a totally new business that would augment their current gas and hybrid business, which they expect to have for at least another 15 years, if not more. This would also help them develop acceptance of self-driving vehicles and brand loyalty for the time when they can sell self-driving cars to customers directly. It can also serve as an important new business model for them should many of their customers decide not to buy any vehicle in the future and instead rely on automated vehicles to provide on-demand shuttle and taxi services.

I was told by a source close to the auto industry that we could see the automakers start buying property in various regions of the US and around the world where they could keep these cars parked and build large charging stations to keep them powered. They would then be able to provide an on-call taxi or shuttle service to people who need to get from point A to point B.

Apple’s potential acquisition of Drive.ai likely acknowledges the fact that self-driving shuttle and taxi services could be the first major step in bringing automated vehicles to the public.

I am not convinced at this time that Apple is building its own self-driving vehicles; instead, it may be developing technologies that could be crucial for automakers and others to use in their own programs.

Given Apple’s business model, an automated vehicle will most likely be another node for delivering Apple apps and services. Yes, they could provide fundamental technology like maps, AI-based navigation, AI-based cameras, and the specialty sensors needed for level 5 driving automation.

But imagine if you could get into a Robo-Taxi and personalize and customize the audio, video, and communications experience for your ride. If people are not driving, they will want something to do during the drive time.

In fact, if Apple is involved with the fundamental design of these level 5 cars, they could build in Wi-Fi and Bluetooth systems and put video screens in each seat. When a person gets into an Apple-equipped car, it automatically connects to their iPhone and makes it possible to view TV shows and movies, listen to music, and make video calls. One of the virtues of a self-driving car is that you are not driving it. Given that you have downtime, you could fill it with Apple-based services that make Apple even more valuable to you.

I find Project Titan to be one of Apple’s more fascinating research projects, and buying a company focused on shuttle or taxi services could make a lot of sense for Apple should they want to play in the autonomous vehicle future.

Recode’s CodeCon News Shows Tech Still in Denial

Recode’s Code Conference, taking place this week, is an annual appointment of the who’s who of tech with Kara Swisher, Peter Kafka, and crew. This year’s lineup includes talks with YouTube CEO Susan Wojcicki, Facebook executives Adam Mosseri and Andrew Bosworth, Amazon Web Services CEO Andy Jassy, Fair Fight founder Stacey Abrams, Netflix vice president of original content Cindy Holland, Russian Doll star Natasha Lyonne, and Medium CEO Ev Williams, to name a few.

There is still one day to go, but so far, one trend seems to come through: much of tech is still in denial about the issues tech and society are facing.

The Relationship Between Tech Companies and Government Sure Is Complicated

The past few months have seen the relationship between government and tech giants become much more complicated, from the call to break up Amazon, Google, and Facebook, to antitrust probes and calls for regulations on AI, facial recognition, and more.

At Recode’s Code Conference, speakers touched on many of these topics but gave little reassurance that they grasp the urgency needed to address some of these issues.

Instagram’s Adam Mosseri said that while splitting up Facebook and Instagram might make his life easier, it is a terrible idea because splitting up the companies would make it exponentially more difficult to keep people safe, especially on Instagram. He went on to say that more people work on integrity and safety issues at Facebook than work at Instagram in total. This is not the first time the argument that size matters has been made. It seems disingenuous, though, not to point out that size also matters as a negative. It is, in fact, the size and reach Facebook has that make it such an important platform for bad actors to target, from election manipulation to hate speech. One could argue that a smaller company, while more limited in resources, would also hold less appeal.

AWS CEO Andy Jassy said he’d like to see federal regulation on how facial recognition technology should and should not be used. His eagerness, however, was driven by a concern that otherwise we would see 50 different laws in 50 different states. He also stated: “I strongly believe that just because the technology could be misused, doesn’t mean we should ban it and condemn it.” Amazon, as well as Salesforce and Microsoft, have all faced employee criticism over their involvement in providing technologies to ICE and US Customs and Border Protection. At CodeCon, immigrant advocacy organization RAICES accused tech companies of supporting the Trump administration’s zero-tolerance stance on immigration by making their technologies available to the agencies. While tech providers have been working with government agencies for years, the growing entanglement between privacy and civil liberties on the one hand and government interests on the other is raising the scrutiny, especially under the current administration.

Facebook to Reveal New Portal Devices

Talk about being in denial. Andrew “Boz” Bosworth, Facebook’s vice president of AR/VR, told The Verge’s Casey Newton that the company has a “lot more that we’re going to unveil later in this fall” related to Portal, including “new form factors that we’re going to be shipping.” While no sales numbers were provided during the interview, Bosworth said that Portal’s sales were “very good.”

It is still unclear to me how Portal can be a long-term success for Facebook. The smart camera and smart sound that follow the subject were probably Portal’s most significant appeal. The built-in Alexa added to the draw for those users who might have liked the technology but were not that keen on letting Facebook into their home.

I do wonder how long it will take both Amazon and Google to add similar technology to their screen-based devices, and what impact that will have on Facebook’s hardware. Both Amazon and Google scored better than Facebook did in our privacy and trust study, signaling that consumers will have a higher propensity to let those brands into their homes before they let Facebook in.

I am also not convinced that Facebook’s focus on human connection transfers from Messenger to Portal. The kind of personal exchange that Portal is focusing on does not involve the same type of people we tend to engage with on Facebook Messenger. According to Similarweb.com, in 2018 Facebook Messenger had the second-largest audience, following WhatsApp. While heavily skewed to North America, Facebook Messenger counted 1.3 billion users worldwide. Messenger has proved to be a very effective marketing channel, which means that many of those interactions are between consumers and a brand, or consumers and a bot.

While I understand how Portal offers the opportunity to create a more meaningful connection with users, I feel that Facebook is underestimating how much more cautious, and even irrational, people become when it comes to privacy in the home. Ultimately, I do not see Facebook being able to deliver a differentiated smart home, assistant, or video chat service that will drive consumers to invest in its ecosystem over that of Amazon or Google.

We Are Trying Our Best, We Are Very Sorry

At the end of day one, Kara Swisher was asked if there was a common thread running through the interviews, and she answered that everybody was saying: “We are trying our best, we are very sorry.”

It is hard to take the act of contrition on display as genuine when so much is at stake with tech at the moment. YouTube CEO Susan Wojcicki was asked by Ina Fried, the chief technology correspondent at Axios: “I’m curious, are you really sorry for anything to the LGBTQ+ community, or are you just sorry that they were offended?” The almost three-minute-long string of words that started with “I am really personally very sorry” was a non-answer to a straightforward question. Wojcicki pointed to overall improvements in handling hate speech that will benefit the LGBTQ+ community but really did not explain any of the thought process behind the decision.

Unfortunately, Wojcicki is not alone when it comes to the leadership of tech giants lacking accountability and transparency. Some commentators say these issues are not black and white, which is true, but that should not stop us from trying to resolve them.

It seems to me that most tech companies are not even willing to admit there is an issue, which will make it impossible to find a solution. Wojcicki, for instance, was not even ready to acknowledge that social media platforms contribute to radicalization. In a refreshing twist of events, Twitter seemed more in touch with reality as top legal counsel Vijaya Gadde said: “I think that there is content on Twitter and every [social media] platform that contributes to radicalization, no doubt.”

I am an optimist, and I like to see the good in tech. I certainly don’t want to fix something that is not broken, and I worry about government intervention because of regulators’ lack of understanding of the world we live in and their tendency to put their political agenda before us. That said, with the changes that technologies such as AI, ML, and 5G are bringing, it is time for big tech to step up their accountability, transparency, and ethics game. Whether you believe Voltaire or Spider-Man said it, it has never been more on point than today: “With great power comes great responsibility.”

The Video Surveillance Debate

It seems an interesting conversation is going to have to start taking place in many democracies. Machine learning has reached a place where it is ready to be deployed for facial recognition, threat assessment, and machine-intelligence-based surveillance. Some of our readers who live in, or travel frequently to, China know this technology is already mass-deployed in nearly all major cities where video cameras are prevalent. For those of you who are not terribly familiar with how China is using it, a quick story.

A friend of mine travels to China often to recruit students for the academy where he is head of school. On a recent trip, he and his local associates went to buy train tickets the day before he had to travel inland. When he returned the next day to board the train, he was denied entry because his ticket was for the day prior (the day he purchased it), not the day he intended to travel. Trying to sort out the mistake, he went to the ticketing counter and explained that the ticket he purchased was supposed to be for that day, not the day before as the operator claimed. The sales associate behind the counter asked him to wait a moment and went into a back room. He came out moments later and showed my friend video of him and his translator purchasing the tickets, with caption translation, and confirmed the error was the translator’s, who did, in fact, ask for tickets for the prior day’s travel, not the day my friend intended. He told me this story astonished at the speed with which the sales associate found the video of him purchasing the tickets and brought back just that snippet to confirm the error was not the sales associate’s fault.

There is no doubt the sales associate was able to use the current video of my friend to search the previous day’s footage for the recorded account of his transaction and sort out what actually happened. As you can imagine, this sort of thing will make a lot of Westerners very uneasy. But we should have a more open conversation about the benefits of more intelligent surveillance and what sort of regulation needs to be implemented for the West to use this technology safely and to the benefit of its citizens.
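For readers wondering how footage could be retrieved that quickly: systems like this typically map every detected face to a numeric embedding and then search for nearest neighbors. Here is a minimal sketch of that lookup with random stand-in data; a real system would generate embeddings with a trained face-recognition model and use an indexed store for speed:

```python
# Sketch of embedding-based face search over archived footage.
# Embeddings are random stand-ins for the output of a face-recognition model.
import numpy as np

rng = np.random.default_rng(0)
archive = rng.normal(size=(100_000, 128))   # one 128-d vector per archived face sighting
query = rng.normal(size=128)                # embedding from today's camera feed

# Cosine similarity between the query face and every archived sighting.
sims = (archive @ query) / (np.linalg.norm(archive, axis=1) * np.linalg.norm(query))
best = int(np.argmax(sims))
print(f"closest archived sighting: record {best}, similarity {sims[best]:.3f}")
```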

Progress is Always a Trade-Off
Transitions from the old to the new are never terribly easy at a societal level. There is always a demographic that prefers the old ways to the new and either does not see the trade-offs or, most likely, doesn’t feel the trade-offs are worth it. This is all OK and normal, and we have a great deal of human history to rely on to understand these lessons.

What makes the video surveillance and machine intelligence layer interesting is that, at a fundamental level, when we go into public spaces, we are already being surveilled and have been for some time. Nearly all major public spaces have had surveillance cameras for years, as a matter of store safety, insurance in case of robbery or theft, theft prevention, and a range of other reasons. What’s new in this equation are systems that can identify individual people and track them accordingly. This is where most of the debate and regulation is likely to focus.

If you were to ask any normal citizen, they would probably be OK with surveillance intelligence that can do threat detection, threat prevention, and a host of other things that apply to public safety. Where people will likely get more uneasy is when they can be individually identified. Here again, there are trade-offs and benefits, and this is likely where some regulation will have to set in.

At CodeCon, Amazon’s AWS CEO Andy Jassy mentioned Amazon’s sale of its facial recognition technology (Rekognition) to the US government and defended the company’s position. However, he also went on to explain why regulation is needed, and he is in support of such regulation around facial recognition technology for video surveillance. Jassy’s position is that he understands why there are concerns, but he believes the technology has real benefits, and he does not feel we should immediately ban or condemn new technology.

I tend to agree with Andy that there are great benefits, especially around public safety and accountability, that citizens will find valuable. That being said, one could argue that while government regulation is needed, any technology in this category should be run by a private company instead of any government or state. I say this primarily because of the point I made about accountability: this technology can and would hold not just private citizens accountable but also government and state agencies. My hunch is that a private organization protected from government or state influence is the better body to manage this kind of technology and ensure the protection of citizens.

Another idea is to develop a consortium of private companies, which could include companies like Amazon, Apple, and Microsoft (maybe Google, but I know how people feel about them); this consortium would be responsible for deploying the technology responsibly and protecting the interest of the public.

I bring this up because now is when we have to start having these conversations and developing a plan, before this technology gets out of hand or becomes too hard to rein back in from the wrong hands.

AMD’s Gamble Now Paying Off

For a company that just a few years ago some people had essentially written off as dead, AMD has certainly come a long way. Not only is it one of the top ten best-performing stocks of 2019 so far (after enjoying similarly high rankings for all of 2018), but the company also recently announced major new partnerships with the who’s who of big tech companies: Microsoft for the next-generation Xbox game console (Project Scarlett), Google for its Stadia cloud-based game streaming service, Sony for the PlayStation 5 game console, Apple for the new Mac Pro, and Samsung for GPU IP intended to power future generations of Galaxy smartphones and tablets.

On top of that, the company just launched its latest-generation desktop CPUs, the Ryzen 3000 series, at Computex two weeks ago, and yesterday at E3 debuted its newest Radeon GPU cards, codenamed Navi, which are based on a new GPU architecture the company calls RDNA (short for Radeon DNA). The first commercially available products from the Navi effort are the 5700 line of desktop GPU cards, designed specifically for the gaming market. Also, in a nod to the importance of CPUs in gaming, the company added a new top-end part to its 3rd-generation Ryzen line: the 16-core, 32-thread Ryzen 9 3950X.

All told, it’s a broad and impressive range of offerings, and it’s tied together by a few critical decisions the company leaders made several years back. Specifically, AMD decided to aggressively go after leading-edge 7nm process architecture for both CPUs and GPUs and, importantly, chose to pursue a chiplet strategy. With chiplets, different components of a finished chip, made with different process technologies, could be tied together over a high-speed connection (AMD dubbed theirs Infinity Fabric) instead of trying to put everything together on one monolithic die. Together, these technology bets enabled the company to reach a point where it’s starting to do something that many thought was unthinkable: challenge Intel on CPU performance and challenge Nvidia on GPU performance. While final numbers and testing still need to be done before official winners are declared, it’s clear that AMD can now be considered in the elite tier of performance in the most important semiconductor markets, particularly for CPUs. In the GPU space, AMD chose not to compare its new 5700XT to Nvidia’s highest performance GeForce RTX 2080 and RTX 2080 Ti cards, but given the aggressive $449 pricing of the new AMD 5700XT, that certainly makes sense. (AMD is quick to point out that Apple claimed the new AMD Radeon Pro Vega II Duo powered multi-GPU cards in the Mac Pro are the fastest desktop GPUs in the world, but they’re really more of a workstation product.)

The momentum AMD is currently enjoying is clearly due, in part, to those big technology bets, particularly around 7nm, as well as to the fact that it is one of the few major semiconductor players with significant CPU and GPU assets. Again, many industry observers questioned that strategy for a long time, but now that the company is starting to leverage technologies from one side to the other, and is integrating its approach across what, admittedly, used to be two very distinct groups, the payoffs are starting to happen. The coordinated efforts are also allowing AMD to do things like be the first company to integrate PCIe 4.0 across both CPUs and GPUs, as it has done with the latest Ryzen and Radeon products, and to leverage Infinity Fabric for both CPU-to-CPU connections (in the Ryzen line) and GPU-to-GPU connections (in the Pro Vega II inside the Mac Pro).

The company’s vision is now broader, however, as it’s started to reach into the server and datacenter market with its Epyc CPUs and Instinct GPUs, even launching what it claims will be the world’s fastest supercomputer in conjunction with the Oak Ridge National Laboratory. The overall Epyc and Instinct market share numbers are still small, and the cloud and datacenter markets are still generally very loyal to Intel and Nvidia, but the fact that AMD is back to being able to compete at all in the server market once again highlights the relevance of its core technology decisions. In addition, though it’s early, AMD’s newly announced partnership with Samsung could finally help the company make an impact on the mobile market—where they have been completely absent. With growing interest in cloud-based game streaming, we could even end up seeing AMD technology in the cloud talking to AMD technology in mobile devices, which is quite a stretch from where they’ve been.

In the end, it's great for both consumers and businesses to see a truly rejuvenated AMD, because it inevitably forces all of its competitors to get better, which, in turn, leads to better products from all the major players, as well as a more dynamic computing market. To be clear, AMD still needs to execute on the broad vision it has laid out for itself, and unfortunately, execution issues have slowed the company in the past. Still, it's encouraging to see some key strategies driving new opportunities, and it will be interesting to see what AMD is able to do with them as we head into the future.

Sick of Social Media Breaches? Ready to Pay for Privacy Help? Here’s What to Look for in Social Monitoring Products

Prior to the ironically privacy-focused F8, Facebook admitted that it “unintentionally uploaded” 1.5 million people’s email contacts without their consent. And earlier this spring, yet another Facebook data breach occurred: more than half a billion Facebook records were left exposed on Amazon’s cloud computing servers, open and available for public perusal – and for theft.

This probably doesn’t shock you – in fact, recent data indicates that it doesn’t. ID Experts® conducted a survey on consumer sentiments toward social media and privacy and found an interesting paradox. Although more than three-quarters of adults believe that their security is at risk on social media, that doesn’t prevent 63% from logging on to Facebook every day, 42% from browsing YouTube and 29% from checking Instagram.

At first glance, this certainly seems like strange behavior. We wouldn’t continue paying rent in a building that experienced constant break-ins and theft; why do we continue using services that repeatedly fail to prevent data breaches and data theft? And why does news that a service has experienced yet another breach leave us completely unfazed?

After considering the data at length, an obvious conclusion emerged – but one with radical implications: Social media users simply don’t know how to protect themselves. At an apartment complex, you’d know how to protect your space and would install stronger locks and a home security system to keep thieves from entering. But, for the most part, social media users aren’t acquainted with all the processes that allow thieves to access, share and abuse their data.

So – what can you do? Although the federal government certainly has a key role to play in protecting user privacy, and social media platforms must step up their game, consumers can’t continue to let their data be exploited while they wait for leaders to hammer out a legislative solution. They need a tool that will allow them to know when their data has been leaked and to protect them from the negative consequences of that leak.

We spent several months thinking about this, searching for ways to empower consumers to manage any threats to their online identity – everything from your profile and the content you upload to the content that is sent to you by others. After extensive conversations with consumers and a good deal of time in development, here’s what we think you should look for when hunting for a program to protect yourself:

Look for software that identifies impersonators. Celebrities aren’t the only ones whose social media profiles are duplicated; Facebook had to delete over half a billion fake accounts in the first three months of 2018. Find a product that scans social media networks, hunting for any accounts that use the same name, nickname, or profile picture as yours and prompting you to report them to the network.

Look for software that stops doxxing. Doxxing – the leaking of personal information online – can compromise not only your online identity but, in more extreme cases, your physical safety. The ideal product will notify you if you yourself, or someone else, shares your personal information online and then allow you to remove the information so it doesn’t get into the hands of marketers, spammers, or someone with more sinister intentions. (A minimal sketch of this kind of scan appears after this list.)

Look for software that cuts objectionable content. Most of us have had the experience of receiving content we’d rather not see – inappropriate images, disturbing language, or unwanted solicitations. To prevent this, seek out software that watches incoming and outgoing posts for illicit activity and drug- and violence-related content, screening your accounts so you don’t have to.

Look for software that fights phishing and malware. Although malware has historically been considered the main threat to our online safety, recent data from Microsoft reveals that phishing attacks have caught up. And some of these attacks are so cleverly disguised that it’s difficult to avoid clicking on them. If you’re considering some sort of social monitoring software, ask whether it will notify you about phishing and malware attempts.
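To make the doxxing example concrete, here is a minimal sketch, in Swift, of the kind of outgoing-post scan such a product might run. Everything in it is hypothetical and simplified for illustration, including the pattern list, the function name, and the warning flow; a real product would use far more robust detection than a few regular expressions.

    import Foundation

    // Hypothetical illustration: scan a draft post for personal information
    // before it is shared. These patterns are simplified stand-ins.
    let piiPatterns: [String: String] = [
        "US phone number": #"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"#,
        "Email address": #"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"#,
        "SSN-like number": #"\b\d{3}-\d{2}-\d{4}\b"#
    ]

    func findPII(in post: String) -> [String] {
        var findings: [String] = []
        for (label, pattern) in piiPatterns {
            guard let regex = try? NSRegularExpression(pattern: pattern) else { continue }
            let range = NSRange(post.startIndex..., in: post)
            if regex.firstMatch(in: post, range: range) != nil {
                findings.append(label)
            }
        }
        return findings
    }

    let draft = "Call me at (503) 555-0147 or write jane.doe@example.com"
    let hits = findPII(in: draft)
    if !hits.isEmpty {
        // A real product would prompt the user before the post goes out.
        print("Warning: this post appears to contain: \(hits.joined(separator: ", "))")
    }

The same basic loop, pointed at incoming rather than outgoing content, is essentially what the objectionable-content and phishing screening described above amount to: pattern matching plus a decision about whether to warn, hide, or block.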

It’s easy to be discouraged and even indifferent in a world where data breaches have become normal and identity theft a common problem. But innovative software designers are working to change this paradigm, empowering users to enjoy the benefits of these platforms, free from the fear of exploitation. Before the next breach hits the headlines, take the time to do some research on social safety products. You deserve to have just as much peace of mind online as you do in your home or office. The key thing is to look for the products and people that can provide it.

Podcast: Apple WWDC19

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell analyzing the announcements from Apple’s Worldwide Developer Conference keynote, including the impact of iPadOS, the details of the new Mac Pro, and the significance of the Sign In with Apple feature.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Apple Shows Pro Content Creators Some Love with New Mac Pro

I attended Apple’s WWDC keynote this week and to say it was overstuffed with important announcements would be an understatement. From key updates to all of its operating systems (including the launch of iPad OS), to new developer tools such as Swift UI and ARKit 3, to new privacy-focused rollouts including Sign In with Apple, the vibe was one of a company firing on all cylinders with a real sense of confidence and even a bit of swagger. Nowhere was this more evident than in the long-awaited and symbolically important announcement of the new Mac Pro.

A Long Overdue Release
Apple’s Mac Pro has long been a favorite of professional content creators, especially those working in the fields of video editing and computer-generated imagery (CGI). However, the last major Mac Pro launch from Apple happened back in 2013, when the company rolled out the current cylindrical version of the product with a starting price of $4,000. A bold design unveiled at WWDC that year; Apple filled the product with unique technology designed to prove the company was still head of the class when it came to innovation. Unfortunately, Apple made some technology bets inside that design that failed to come to fruition, and that put it in a difficult position when it came time to refresh the product. And so, instead of major refreshes that would keep it relevant, the product saw minor speed bumps that saw it fall further behind the competition. The product languished for years, leaving many with the impression that Apple had abandoned some of its most ardent users behind.

Things got so bad that Apple took the unusual step of sitting down with a small group of technology journalists back in April 2017 to announce that a “completely rethought” version of the Mac Pro was in the works. Apple said it would ship…sometime in the future. More than two years later, Apple has finally announced the new Mac Pro, along with a new high-end monitor called the Pro Display XDR, both of which will ship this fall.

A True Mac Powerhouse
Apple executives left out any cheeky comments about the company’s ability to innovate and let the new Mac Pro, which starts at $5,999, speak for itself. The design returns to a more familiar tower form factor, but a highly modular one focused on accessibility, upgradeability, and, importantly, airflow. Cooling is key here, as the system can support workstation-class Intel Xeon processors with up to 28 cores, up to 1.5TB of memory (via 12 DIMM slots), graphics up to the 64GB Radeon Pro Vega II Duo (with two GPU chips), and a 1400W power supply.

Beyond its top-shelf industry parts, the new Mac Pro also includes a custom-designed accelerator card, Apple Afterburner, that ramps up performance for video editors. Afterburner has a programmable ASIC designed to speed the conversion of native file formats, capable of decoding up to 6.3 billion pixels per second. Apple says the card can decode up to three streams of 8K ProRes RAW video, or 12 streams of 4K ProRes RAW video, in real time.
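Those two figures line up, as a quick back-of-the-envelope check shows (the 8,192 x 4,320 frame size and 60 frames per second are my assumptions for illustration; Apple has not published its test conditions):

    // Back-of-the-envelope check: three 8K streams at an assumed
    // 8,192 x 4,320 (8K DCI) resolution and 60 frames per second.
    let pixelsPerFrame = 8_192 * 4_320            // ≈ 35.4 million pixels
    let pixelsPerSecond = 3 * pixelsPerFrame * 60
    print(Double(pixelsPerSecond) / 1e9)          // ≈ 6.37 billion pixels/s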

The system has a steel frame and is encased in an aluminum housing that lifts completely away to give full access to the internals. The housing features unique air-flow channels on the front and back that give the unit a bit of a “super cheese grater” look. Apple carries that look over to the rear of its Pro Display XDR, and it’s not merely a design choice, as the monitor itself has serious cooling requirements.

That’s because inside the 32-inch, 6K display Apple is using a large array of LEDs to drive an astounding 1,000 nits of full-screen brightness and 1,6000 nits of peak brightness. The display supports a P3 wide color gamut and 10-bit color for over 1 billion colors. This is a true professional-caliber monitor that Apple says can compete with other industry products that can cost upwards of $40K. The base model of the display starts at $4999, one with a special matte option will cost $5,999, each without a stand.

Apple spent a fair amount of time talking up the monitor’s new adjustable stand, but when execs unveiled that the stand would cost $999, there was an audible negative reaction from the WWDC crowd. This, to me, was among the only Apple missteps of the entire keynote, and it’s really one more of perception than anything else. Apple knows that many professional content creators already have a high-dollar stand, so the company is wisely offering the display sans stand. I’m certain that if Apple had said the display started at $5,999, or that you could buy it without the stand for less, nobody would have batted an eye. That said, I do find it absurd that to use the display with an industry-standard VESA mount, Apple forces the purchase of a $199 VESA adapter.

Setting the Future Stage
A Mac Pro that starts at $5,999, with a display that starts at $4,999 (minus a stand), is clearly not a product for the average consumer. And that’s the point. With these new products, Apple is showing professional content creators some serious love. Back in 2017, when Apple announced plans for a new Mac Pro, many of us saw that as a good sign, but as time wore on, it became concerning that it was taking so long. How hard, we wondered, is it to build a tower workstation? Apple rewarded that long wait with a true purpose-built system that should deliver world-class performance. Plus, Apple has created a design here that should allow for the type and cadence of hardware refreshes this segment of the market requires.

The other important thing Apple accomplishes with the new Mac Pro is establishing a clear distinction between this product and the rest of its Mac lineup. Why is this important? Because most of us expect Apple to shift the rest of its Mac lineup over to its own A-series, ARM-based processors at some point in the future. When that happens, and Apple talks up all the benefits of the switch, it is conceivable that many would point back to this 2019 launch and suggest that, once again, Apple had launched a Mac Pro that was out of step. However, this design, and especially the inclusion of the Apple Afterburner accelerator technology, firmly establishes that no matter what comes next from Apple, the pro-centered end of its lineup will continue to offer a high-powered x86 option. For pro content creators, this is a very good thing.

Amazon’s Inspiration

For the past few days, I’ve been attending a conference called re:MARS. One can say it is organized by Amazon, but what makes this conference different from the many I attend every year is that it is not all about Amazon. Jeff Bezos and his team have been organizing a conference called MARS for years now. It’s helpful to know what MARS stands for: Machine Learning, Automation, Robotics, and Space. These are the core categories you can expect to learn something about by attending. MARS has been a source of inspiration for Amazon, and a way to inspire others by having guests, generally acclaimed as the top scientists, academics, or entrepreneurs in their fields, share ideas and some of the groundbreaking work they are doing. MARS has always been open only to a very select group of people, but this year, Amazon did a very Amazony thing and scaled the conference.

While it was still invite-only, the group expanded to several hundred, with the goal of remaining intimate and communal and bringing the best and brightest minds together to share ideas and challenges and inspire one another. As I said, what made this event so different is that it was organized by Amazon but not about Amazon. The themes were still Machine Learning, Automation, Robotics, and Space, and the best analogy I can make is that re:MARS is somewhat like TED but focused just on the MARS themes.

Even though this conference is not about Amazon in the way other company events I attend are all about their hosts’ products and announcements, Amazon did fit in its own learnings about machine learning, automation, and robotics, and gave key executives some air time to share what they learned and what excites them about the problems they are setting out to solve. And yes, a few announcements snuck in, like Alexa getting a much more conversational interface and the official reveal of the Prime Air drone, with Prime Air delivery going to market with customers in a few months. The rest of the talks taught us how machine learning is being used in biochemistry to help solve health problems in bioscience. We learned how far robotics has come and what major breakthroughs have led Boston Dynamics to robots that can walk, jump, run, and scale objects very much like a human. We also learned about the behavioral science of humans interacting with robots, and how the ways humans treat robots tell us a great deal about our humanity at its core.

I’ve appreciated the thought provoking sessions and wanted to share a few highlight takeaways.

  • Humans and Robots living and working together. Until you experience how many robots Amazon has built and has running in its warehouses, you can’t appreciate what Amazon executives call a symphony of humans and robots working together. On the show floor at the conference, Amazon displayed the many robots it has designed and built to automate its warehouse work as much as possible. We heard a story about a factory in Japan where human workers show up to work and do some stretching and warm-up exercises, and the robots they work with are programmed to do the same exercises. This has a single goal in mind: to help the workers feel more at peace with and connected to their robot working companions. Since much of this collaboration between humans and robots happens in warehouses and not in public, we don’t see this dynamic yet, but we are rapidly approaching the idea of humans and robots in an active community. So the question came up: how should we humans think about these robots? Are they peers and colleagues, essentially on the same level as humans, or something else? MIT researcher Kate Darling offered a profound observation and a way we could think about robots. Through the years, she explained, humans have lived communally with animals in working relationships as well as companion relationships. So perhaps it is best if we perceive robots in a way similar to how we perceive animals. Fascinating, and worth a good think.
  • AI May Really Help Solve Some of the World’s Greatest Problems. Yes, we hear this line, and it feels cliche at this point, but many of the world’s top minds in the field of AI truly believe it. We heard examples of how machines trained with computer vision to detect tumors were doing a better job of predicting anomalies and specific treatments than expert physicians, or how these machines could predict with greater accuracy the severity of an injury. We learned how the cost of developing a new drug for an illness has gone from $100 million to over $2.5 billion, due to the many failures in the trial-and-error process of ending up with a winning compound. AI seems well positioned to run simulations of these compounds and help bioscientists narrow the field of potential compounds before testing to see their effects. I’ve believed AI will be one of the most, if not the most, transformational technologies many of us will ever witness in our lifetimes, and I believe it even more now.
  • One Observation about Amazon. Yes, this was not a conference about Amazon, but there is an interesting Amazon observation to be made. In the keynotes Amazon employees gave, we heard about Amazon’s robotics strategy and what the company learned solving challenges in automation with robotics. We learned how Amazon’s AI models are helping to make shopping on Amazon, or using Amazon services, more relevant and personal, providing a better customer experience. We learned how Amazon created its Just Walk Out retail technology showcased at Amazon Go stores, and more. And what hit me was that while Amazon wasn’t here to pitch AWS to the world, Amazon, as a company, is the first and best customer of AWS. Seen this way, Amazon has built AWS on the back of the learnings of a company as good at scaling technology as anyone, and across many industry disciplines. These learnings, and the solutions they led Amazon to invent, put the company in a position to offer those solutions to hard problems as part of AWS. AWS was built out of Amazon needing to solve its own problems and then became a platform to help other people solve theirs. The tools included in AWS now embody deep expertise in machine learning, computer vision, automation, and more. I’ve long felt competing with Amazon in key areas like retail and commerce will be very difficult, and I believe that even more now.

At re:MARS, Amazon hopes that invitees are inspired by the work the speakers share in the sessions, and that it inspires them to keep charging forward and inventing the future.

Apple’s Chinese Protection During The Trade War

I have been talking to an executive in China who tracks the supply chain. I asked for his views on Apple’s manufacturing exposure during the trade wars and whether China is reluctant to place any significant tariffs or restrictions on Apple.

Most people don’t know that Apple, through their multiple Chinese factories, have over 1 million Chinese workers making their products. This is an important fact. This executive told me that China knows it has to tread lightly when it comes to Apple. As I pointed out in last week’s Think.Tank column, most PC and Smartphone vendors are already looking to move at least US bound products to manufacturing centers outside of China as soon as they can.

Most of the PC vendors are moving in this direction. However, we have heard very little about whether Apple plans to move some manufacturing out of China, or whether it feels somewhat protected given the large number of Chinese workers it employs. The US also has to be careful with Apple, as its products are still in high demand worldwide and Apple employs 304K people around the world, 47K of them in the US.

China is about to release a list of US companies that Chinese companies cannot do business with, but Apple will not be on that list. For both China and the US, Apple is highly strategic to their economies. This should protect it at this time. Even Apple’s biggest competitor in China is against any moves that would hobble Apple there. Huawei CEO Ren Zhengfei said of the call to ban Apple, “That will not happen, first of all. And second of all, if that happens, I’ll be the first to protest.”

However, the executive I spoke with, who lives in China, says that even though Apple’s products are still viewed as the best in the market, there is real backlash coming from some Chinese consumers, who are being pressured to buy only Chinese-made products. Even worse is a trend in which some people who own Apple iPhones and Apple Watches are being shamed by friends who feel China is being persecuted by the US tariffs, and who make them feel “embarrassed” for owning an Apple product.

Ironically, these “shamers” are pushing people to buy Huawei’s smartphones instead, and last week there were reported cases of Huawei phones dying by the scores in Hong Kong and throughout China.

At the moment, I see a consumer backlash as the biggest threat to Apple in China. Unfortunately for Apple, this is totally out of its control. While I do see some protection for Apple on the tariff side from both China and the US, the anti-Apple movement could pick up steam and impact its sales in China.

Let’s hope that will not happen but it is something that Apple may need to attack with a marking program in China if this shaming push gains more traction.

Apple WWDC: Two Non-Announcements that Made News

Since Apple’s WWDC keynote on Monday, it has been fascinating to see how people reacted to two things in particular: the death of iTunes and mouse support for iPad. In a way these were non-announcements as neither of them was announced on stage and I put them together because in my mind these two are deeply rooted in legacy workflows and it seems that people’s feelings about both are quite polarized. On iTunes there is a camp feeling nostalgic and sad to see it go and one that wished its death was called much earlier. On mouse support for iPad there is a camp rejoicing for it as it brings the iPad closer to deliver a PC experience and there is one that sees this addition as a step back in allowing for new touch-first workflows. So which camp am I in?

iTunes Can Now Rest In Peace

The iTunes brand sure meant a lot to my generation. iTunes, and of course the iPod, were the door into digital music, so I can understand why so much of the WWDC press coverage focused on this. It is the end of an era, and the steps Apple took to transition the iTunes functionality reflect precisely where we are with content consumption.

iTunes has felt tired for a long time, and it also felt like it was trying to be too many things at once. Apple itself made fun of this second point on stage, announcing that it was adding calendar and email support to iTunes before unveiling what it was really doing: splitting its functionality. For content, we now have three distinct apps: Apple Music, Apple TV, and Podcasts. These are the same apps we use on iOS, so it just makes sense to have consistency with the Mac, especially given the efforts around Project Catalyst, aka Marzipan. The change also reflects that music is no longer the only digital content people consume regularly. Users’ personal content does not suffer from this change either; it will be automatically transitioned into the apps that match the content type, so your music library will be in Apple Music and your movies in Apple TV.

iTunes was also how consumers synced their devices and performed backups, but even this has changed for many consumers. If you have embraced iCloud, you have had little need for iTunes already. And if you have not transitioned to the cloud, you can still perform these tasks via Finder, which, if you think about it, is a much more logical place for them.

Digital content has moved on, and so have we. I am thankful for the service iTunes provided, but I am glad to let it go.

To Mouse or not to Mouse, this is the Question!

This non-announcement is a little more complicated. At no point on stage did Apple refer to mouse support as a feature for iPad. Instead, Apple talked about an enhanced touch experience that would improve editing on iPad, something many users had been asking for. As someone who uses an iPad Pro as my primary computing device when I travel, I can attest to how painful editing text can be. Apple also announced desktop-browsing support for Safari, which basically means that while in iOS sites defaulted to a mobile version and users could force the desktop version, iPadOS will be set up the opposite way. This is a step that will improve workflows quite significantly for users.

So how do we get to mouse support? A developer noticed an accessibility feature in iOS 13 that delivers AssistiveTouch with a mouse target that replicates a finger touch and can navigate using Bluetooth and USB mice. This feature was already available in previous generations but has been optimized. The iPad also got the newly introduced USB support for external storage, so I do wonder if the USB mouse support is part of the same release.

Of course, you can be a skeptic and say that Apple buried the feature so as not to admit that its stance on iPad and mouse has changed. When I look at some of the videos people have posted of how this new mouse feature works and, more interestingly, when I read comments from regular users, I can’t help but think that at least this version of mouse support is really what it says on the label: an accessibility feature. I say that because it is clearly not designed to replicate the traditional use of a mouse. Users will try to use it in that capacity, but I would guess the experience will be subpar compared to what it could be if Apple really decided to give the iPad a mouse. It is not the first time Apple has changed its mind and marketed the U-turn in such a way that you think it was always planned. I just don’t believe that is the case on this occasion.

From the short demo I had of the new gestures, it seemed to me that a lot of my pain points were addressed, but I was curious to hear what people who downloaded the developer beta thought of them in comparison to this accessibility feature. One comment was particularly interesting to me.

Owen talks about the gestures being useful if you are typing on glass, while the accessibility feature makes a difference when using a keyboard with your iPad. I find this interesting because it seems to align with Apple’s position on touch on the Mac. Apple has always said it does not believe in vertical touch: if your hands are on the keyboard, it is unnatural to reach out and touch the screen. I used to share that conviction when my primary devices were Macs and larger PCs, but I have come to use touch and keyboard a lot with my iPad because that is what I learned to do on my Surface. The reason this is more natural to me than on a larger PC is simply that both the Surface Pro and the iPad tend to be much closer to me than a regular PC, making lifting my hand to touch the screen much more comfortable. As a matter of fact, I do not use the touchpad on the Surface keyboard as much as I use it on a traditional notebook.

We are all a little different, and our workflows are all unique, even when we use the same apps. For me, what it really boils down to, and I have said this before, is whether the device fits your workflow. I admitted to my editing and browser pain with the iPad Pro, pain that I endure because the return I get from being able to do everything I want with one device is enough of a driver for me. Am I happy that Apple is addressing my pain points with improved touch? Absolutely! Do I want a mouse? No, I don’t, because if it were such a crucial part of my workflow, and the same goes for the keyboard, I would carry a Mac.

If you still think you cannot do real work on an iPad because of the lack of mouse support, I don’t think the iPad fits your workflows, and that’s ok. If you want to try and use the iPad and feel that you are compromising on user experience in such a way that the pain is more than the reward, then the iPad is also not for you, and that’s ok too. With Project Catalyst, it could be that some users for whom keyboard and mouse are essential might find that iOS-like apps on a Mac bring them closer to an iPad experience without compromising on their core needs. This is the beauty of being able to choose a tool that best fits your workflow, not the other way around.

Apple WWDC: The Privacy Foundation, and Moving Software Platforms Forward

As I attended Apple’s main developer event yesterday, I picked up on something related to privacy that I had not thought about before. Yes, there were great announcements, like Sign In with Apple, hidden email proxies, and the ability to further limit app background location tracking, as examples. More interestingly, Apple is building a privacy firewall not just around its devices but, truly, around its customers.

But what struck me, and what I had not thought of before, was how the last few years of Apple making privacy a central theme have been about building credibility by laying a foundation of privacy-centric technologies. Each year, more and more privacy-focused solutions from Apple have emerged, and I firmly believe the goal was credibility.

Now, you may say Apple’s customers already trust Apple and believe it is credible. It is true that a segment of the customer base already feels that way and trusts Apple with their privacy, but there are many more consumers who may be more indifferent, or who just haven’t thought about their privacy that much. For those people, Apple wants to be seen and known as a trusted provider of privacy-focused platforms, software, and services. You may not think Apple was in a position of still having to earn its customers’ trust, but the group of people for whom that trust still has to be earned is likely larger than one thinks.

Over the last few years, Apple has spent a lot of time and energy earning the trust of its customers and being seen as a company on a mission to protect their privacy, not just from Apple but from others as well. Where things get interesting is what Apple can do once that foundation is laid and people know they can trust Apple with their privacy. Enter the new Find My app.

It went largely under the radar, but Apple’s Find My app is fascinating in execution. If you are not familiar, Find My is a service that goes one step further than Find My iPhone. Apple set out to solve the problem of locating your Apple device when it is offline. What it came up with is Find My, and when you understand how it works, you understand it could only have been made by a company with a credibility foundation of trust around privacy.

When you need to find an offline device, you can use the Find My app, and it will have the offline device act as a Bluetooth beacon; any other Apple device in the area will relay back the location of your offline product. So, basically, if you left your Mac or iPad at the office and it is offline for whatever reason, it will send out a secure Bluetooth signal, and any other Apple devices in the area (even if they belong to someone else) will send a signal back to you with the location of your device. The surrounding iOS devices, and their owners, never know they are assisting you in finding your device.
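For the technically curious, here is a toy sketch in Swift of the flow just described, reconstructed purely from Apple’s public description. The names and the “encryption” are placeholders, not Apple’s actual implementation, which uses rotating public keys and true end-to-end encryption.

    import Foundation

    // Toy reconstruction of the Find My relay flow; all names and the
    // "sealing" below are illustrative placeholders.

    struct Location { let latitude: Double; let longitude: Double }

    // 1. The lost, offline device emits a Bluetooth beacon carrying a
    //    short-lived public identifier.
    struct Beacon { let publicKey: String }
    let lostDeviceBeacon = Beacon(publicKey: "rotating-public-key-123")

    // Toy stand-in for deriving a public key from a private one.
    func publicKey(for privateKey: String) -> String {
        privateKey.replacingOccurrences(of: "private", with: "public")
    }

    // 2. A stranger's nearby device hears the beacon, seals its own
    //    location against the beacon's public key, and uploads the blob.
    //    It never learns whose device it helped, and the blob is opaque
    //    to Apple as well.
    func relay(beacon: Beacon, finderLocation: Location) -> String {
        "sealed:\(finderLocation.latitude),\(finderLocation.longitude)|\(beacon.publicKey)"
    }
    let uploadedBlob = relay(beacon: lostDeviceBeacon,
                             finderLocation: Location(latitude: 37.33, longitude: -122.01))

    // 3. Only the owner, holding the matching private key, can unseal the
    //    blob in the Find My app and see where the device was spotted.
    func ownerDecrypt(blob: String, privateKey: String) -> Location? {
        let parts = blob.split(separator: "|")
        guard parts.count == 2, String(parts[1]) == publicKey(for: privateKey) else { return nil }
        let coords = parts[0].dropFirst("sealed:".count)
            .split(separator: ",")
            .compactMap { Double(String($0)) }
        guard coords.count == 2 else { return nil }
        return Location(latitude: coords[0], longitude: coords[1])
    }

    if let spot = ownerDecrypt(blob: uploadedBlob, privateKey: "rotating-private-key-123") {
        print("Device last seen near \(spot.latitude), \(spot.longitude)")
    }

The design point the sketch tries to capture is that the relaying device only ever handles a sealed blob; nothing in the exchange identifies the owner, and nothing the relay uploads is readable by anyone but the owner.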

What’s crazy about this is that some random strangers Apple device is going to help you find your device if need be. If this wasn’t coming from Apple, it would seem awfully creepy. In fact, had Apple not built a foundation of privacy credibility, I don’t think they could have released this feature. It’s only because Apple’s customers now believe Apple is not going to track or steal their location and use it for malicious reasons in this solution that people will be Ok with letting the service use their nearby Apple product to help you find yours. This is the crux of the matter. Only because Apple has credibility in the area of privacy can a feature like Find My become possible.

Looking forward, the question then becomes: now that Apple has this foundation of credibility around privacy, what other new features or services can it release that would otherwise be creepy or intrusive coming from any other company? I expect Apple has many more clever solutions up its sleeve that would not have been possible had we not believed it credible as a protector of our privacy.

Moving the Software Platforms Forward
Tim Cook had a line as he was closing the keynote where he said Apple had moved each of its software platforms forward. From a thousand-foot view, I think this is the takeaway that matters.

People will be tempted to look at all of Apple’s software announcements and feel they are simply iterative. But iteration is progress, and iteration moves things forward. Many years of iteration can look like brand new things over time. Apple released a wide range of features that, if I can add a word to Tim Cook’s statement, meaningfully move their software platforms forward.

While iOS and macOS had what I would consider many meaningful new features, I think the iPad got the most meaningful update of them all, and it’s worth a few minutes to point out why.

Year after year, Apple has been addressing the main issues with the iPad that stood in the way of many iPad owners comfortably moving more of their workflows from their Mac to the iPad. I am one of those people who would love to use the iPad for as many of my workflows as possible but could not go to the iPad full time. With the newest features coming to the iPad, which now has a true platform name in iPadOS, Apple has eliminated many more of the reasons people cite for not being able to move from their Mac or PC to the iPad.

One of the main ones is desktop-class browsing. Especially in many corporate workflows, web-based software is extremely common. A great many web-based services used by corporations, small businesses, and the like don’t use apps but instead rely on browser-based software. Even though the iPad could request a desktop website, it still was not the same Safari browser as on the Mac, and many web apps either did not work at all or did not work properly.

In my own workflows, both WordPress (where I prefer the browser to the app because it is more functional) and the web-based solution I use to send out the subscriber email do not run well on iPad Safari. We at Creative Strategies use a service called Infogram as our data visualization solution. Infogram does not function on iPad Safari. I have a number of examples, and I’m sure others have many more, but the bottom line was that iPad Safari was not a desktop-class browser. With iPadOS, the iPad gets a true desktop-class browser, and it’s one of those things that seems small but is a big deal.

Multitasking took a huge step forward as well. Running the same app in two windows side by side is something I know iPad Pro users have been requesting forever, especially being able to run two Word docs or two Excel files side by side. The improved touch-based text editing is something I’m excited to try as well. My hope is that Apple keeps investing in making touch mightier than the mouse; meaning, make a natural interface like touch and our finger a more efficient way to manage and edit text than a mouse ever was. I think Apple is getting close, but I’d like to decide for myself by trying it out.

While there is still more to analyze with regard to WWDC, I believe this event was one of the most meaningful in totality for all of Apple’s platforms.

Apple Blurs Lines Across Devices

Is the iPad really a computer? Or is it a computing accessory?

That’s a question that’s triggered enormous amounts of debate and discussion for many years now, and regardless of where you stand on the issue, it’s one that has never really been definitively answered. At yesterday’s WWDC, however, Apple certainly took some big steps towards an affirmative answer with the launch of iPadOS and all the latest enhancements it entails.

Most importantly, the introduction of a true file system and the Files app to access it puts the iPad in a similar class to other “full” computers. Though Steve Jobs may be rolling in his grave because of it (he was notoriously averse to anything like a visible file system for the iPad), it’s something that the iPad has desperately needed for those who want to run computer-esque productivity apps and do enterprise-like work on the device. It turns out storing, finding, and organizing files on local storage—and having easy access to external storage (like USB sticks!)—is an absolutely essential part of those types of efforts. Without it, the iPad was severely handicapped; with it, it’s time to give a fresh look to the concept of tablet computing.

In addition, though Apple didn’t talk about it, the forthcoming version of iPadOS (expected this fall along with iOS 13, tvOS 13, watchOS 6, and macOS Catalina) also includes support for Bluetooth mice. The feature is currently hidden in some accessibility settings, but it’s hard to imagine Apple keeping it there for too long, as the lack of mouse and true cursor support has been a lingering concern around iPad computing for some time as well. Now that the secret’s out, serious iPad users will be clamoring for it.

But Apple’s blurring of device lines wasn’t limited to iPads becoming more computer-like. There were also several introductions that highlighted how the iPad can become a more useful computer accessory. Most notable of these was the debut of the new Sidecar feature in MacOS Catalina that will let you use an iPad as a secondary monitor for your Mac. While there are certainly cheaper options for dedicated monitors, the ability to let you use your iPad as a secondary display on an occasional (or even regular) basis is something that many Mac users will undoubtedly find very useful. In an age of increased multitasking, there’s never enough screen real estate, so options to extend your desktop and apps across multiple screens make a great deal of sense.

Interestingly, because Sidecar also supports Apple Pencil on the connected iPad, it’s almost like bringing some level of touch-screen support to the Mac. To be clear, it only works with Mac apps that currently support stylus input (think graphics apps), but it can add a Touch Bar, even to Macs that currently don’t have them, and will likely lead to other touch-enabled features.

Another critical iPad-to-Mac benefit is the company’s new Project Catalyst, which allows developers to easily move some of the 1 million apps designed for the iPad over to the Mac. Apple said it used the technology to move some of its own apps, such as Apple News, Stocks, and Home, over to macOS, and with Project Catalyst, it is opening up the same capability to developers who use the company’s Xcode development tools. Given the relative dearth of new Mac applications, this is a critical step for the ongoing life of the Mac platform.

What’s interesting about all these developments is that Apple is taking a new approach to its various product categories that seems less concerned with potential overlap and more concerned with providing the best advances possible for each. In other words, in the past, Apple appeared to be very conscious of the potential confusion that could be created in understanding what an iPad could do (and how it could be done) versus what a Mac could do. Hence, there was much more separation between the iPad and the Mac in terms of capability and functionality.

As device usage trends, product category sales trends, the impact of the cloud, and several other realities of the modern digital world have evolved, however, Apple seems to be much less worried about defining the categories (and limits) of each of its devices. Instead, these new announcements suggest that they want to leverage whatever resources they can to make the iPad experience as good as it can be and the Mac experience as good as it can be. This new approach towards the realities of our multi-device world may create some confusion among some people about what device to buy, or which one to use for certain applications or in certain situations. In the long run, however, it seems to be a much healthier perspective that allows people to get the most out of whatever individual devices, or combination of devices, they happen to have access to.

From an overall perspective, these developments are particularly important for the Mac, which has certainly seemed to be Apple’s abandoned stepchild for quite some time now. In conjunction with the impressive-looking new Intel- and AMD-powered Mac Pro also introduced at WWDC, however, it’s clear that Apple is providing some much-needed love to its first platform device.

If Apple really wants to get serious about letting people use their products and services across the reality of today’s complex multi-device world, they’re going to have to do a lot more work in getting some of their devices (like Apple Watch), applications, and services to work across other non-Apple platforms (as I’m sure they’ll eventually do). In the meantime, however, these new announcements show that Apple is becoming the kind of company that’s perfectly comfortable with embracing the uncertainty and blurriness of today’s digital product categories.

Reading the WWDC Tea Leaves

With Apple’s main developer event coming Monday, I wanted to share some thoughts on why I think this year will be significant for Apple and developers.

There is a point about Apple’s hardware and software platforms that I think gets overlooked. No single company has as significant a market share in as many personal computing categories as Apple. You will not find another company with meaningful market share in PCs, smartphones, tablets, and wearables. Other companies have meaningful market share in one or two of these, but none has as much device reach across categories as Apple.

This is relevant because developers play a key role in the secret sauce of Apple’s success. Apps make platforms more compelling, and app innovation is important to keep platforms from going stale. This is where my point about Apple’s hardware reach across categories becomes relevant. Apple has always talked about its different platforms: the Mac is its own platform, Watch is its own platform, iPad is its own platform, and iPhone is its own platform. What has intrigued me about Marzipan, and the development tools that will accompany it, is its potential to let Apple create THE platform.

Marzipan’s progress will be the thing I’m watching most to see how far it has come and what apps Apple uses as examples along with what tools are highlighted for developers. The current apps Apple has on macOS like Stocks and News have become some of my most used daily apps on macOS. This gives me hope that as developers can move their iOS apps to Mac that it will ignite a new fire of software development for the Mac platform.

The reality, however, is that other than on the iPhone, unique app development at scale has not happened for Apple’s platforms. You can argue there is a critical mass of iPad apps designed for the iPad, and that is true to a degree, but nothing like the scale of iOS. Similarly, there is very little third-party app push or innovation on Apple Watch, despite a rather sizable installed base. Every time we survey and talk to Watch owners, it becomes clear there is not a lot of interest in third-party apps yet, and most people only use Apple’s first-party apps. I still believe Apple Watch remains an exciting platform for developers, but we have not even scratched the surface there.

Marzipan, and the tools Apple gives developers to unify the platform, have the potential to bring more developers (easily) to underserved Apple hardware like the Mac and Watch, but they will also help the iPad and possibly even Apple TV. While it is important to note that macOS has not been totally devoid of software development, the reality is that macOS development is still a niche for third parties, and if Apple can get thousands of new developers and apps to macOS, it will be a significant boon for the platform.

Marzipan and the unification of the platform give us the potential, for the first time, to see a company truly create one unified platform across hardware categories. Given the reach of Apple’s hardware I mentioned, this is one of those “only Apple” situations Tim Cook likes to mention. I’m optimistic.

Can Siri Move Forward?
While Marzipan and its progress seem a given, the area that is a big question for me is Siri and, more broadly, Apple’s machine learning efforts across its categories of hardware and software. With Apple’s hire of Google’s AI chief John Giannandrea, I hope we see some progress and improvement in Siri as a platform and in how Apple’s devices and platform are getting smarter and more personal for customers.

I’ve long said Apple has architected machine learning into its core iOS so that iOS adapts to its owner and becomes even more personal as an experience over time. Google does this as well, and as far as both platforms are concerned, the device starting to become more helpful and more personal is the battle we are watching play out. Google is marketing its Pixel phones as a more helpful smartphone, and Apple has an opportunity to make progress in this area and create an even deeper value for its customers.

Here again, I think Apple’s device reach across categories is relevant. Google cannot claim significant market share in PCs (yet), tablets, or wearables, only in smartphones. That means Google can market a more helpful smartphone but not necessarily a more helpful platform. This is where I think Apple has the most potential to be unique and create more compelling experiences for its users.

While Siri may never be better than Google Assistant, Google Assistant may never have the reach of Siri or Apple’s platforms across devices. So while Google can market the more helpful phone, Apple has the potential to market the more helpful platform.

Again, for Apple, the inflection point from the past 10 years to the next 10 years could be the unification of the platform. If they can execute on this, they remain well positioned for the future.

Major Changes Coming to the Cable-Wireless Relationship

Cable’s third, but still nascent foray into the wireless business will undergo major change over the next couple of years, driven by M&A, the rollout of 5G, and new competition in broadband from fixed wireless. Three ‘events’ are cause to revisit cable’s prospects in mobile sector: the announcement, on May 20, that New T-Mobile would continue the MVNO relationship that Sprint has with Altice (if the Sprint deal goes through), plus extend ‘fair terms’ for 5G; rumors about Altice launching its MVNO service at significantly discounted prices; and the impending rollout of 5G, and, over time, fixed wireless services by Verizon and, potentially, New T-Mobile.

Question #1 is whether the cable MVNO effort has been successful to date. In a nutshell, ‘sorta’. Xfinity Mobile, which launched about two years ago, claims 1.4 million subscribers, which is about 5% of Comcast’s base of broadband customers. Spectrum Mobile (Charter’s offering) launched nearly a year ago and has 340,000 subscribers, about 1.5% of its broadband base. Those are real numbers, but they aren’t moving the revenue needle at their parent companies, nor has cable taken measurable share from wireless. Those signing up for cable MVNO services trend toward the price-conscious, in that the majority of subs are choosing the ‘pay per GB’ plan.
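Working backward from those figures gives a rough sense of scale. This is a quick derivation from the numbers above, not independently reported data:

    // Implied broadband bases, derived purely from the figures above.
    let xfinityBase = 1_400_000 / 0.05      // ≈ 28 million broadband subs
    let spectrumBase = 340_000 / 0.015      // ≈ 23 million broadband subs
    print(xfinityBase, spectrumBase)

In other words, even tens-of-millions-strong broadband bases have so far converted only a sliver of customers to mobile.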

So, the three largest cable companies are in wireless, but they’re not all in. None has invested in deploying its own infrastructure, nor has any of them acquired spectrum in recent auctions. I don’t think they’re in wireless because they see huge potential profits in that business (Verizon’s wholesale terms make that very tough). They’re in it because they need to keep a toe in, given larger industry dynamics, and because of some modest retention benefits to their broadband base.

But there are three major developments on the horizon that will force a change in cable’s wireless strategy. The first change depends on what happens with the T-Mobile-Sprint deal. If it goes through, as I still believe it will, decent terms will have to be extended to Altice, including 5G. This will be part of any concessions offered. So, Altice will be able to offer a price-competitive wireless service, bolstered by its growing network of Wi-Fi hotspots. Altice’s infrastructure will prove to be an even more critical asset to New T-Mobile, as they race to build out 5G services leveraging both the 600 MHz spectrum and Sprint’s 2.5 GHz network. New T-Mobile will be a major player in 5G, and has promised it would offer residential broadband service to 50% of homes at prices below today’s typical broadband.

Second, the 5G rollout that will occur steadily over the next couple of years will alter the equation for cable. It is not clear whether the Verizon MVNO contract with Comcast and Charter includes 5G. If, as we believe, it does not, their mobile offerings would start to be at a disadvantage, especially once a compelling array of 5G phones (such as a 2020 iPhone) becomes available and as 5G coverage hits some critical mass.

The third potential game-changer is fixed wireless. As Verizon rolls out fixed wireless to more cities beginning later this year, it will start competing more directly with cable companies in the broadband business. This dynamic does not augur well for the MVNO relationship, especially considering that a major motivation for cable’s wireless initiative is to boost retention of its broadband customers. It gets sticky for New T-Mobile over time, as well. Yes, it must extend fair MVNO terms to Altice for the foreseeable future, yet its planned home broadband might be competing directly with Altice in a handful of markets.

The implication is that cable will be forced to revisit its wireless strategy in the not-too-distant future. If the cable companies want to get to the next level of growth, they will have to reduce their dependency on a purely wholesale relationship for their mobile offerings. Fortunately, some viable options are presenting themselves, and at just the right time. Rather than a choice of one option or another, it’s more of a ‘cocktail’, consisting of the following:

  • CBRS. This is the 3.5 GHz shared spectrum service that will be launching later this year. This could be a lower-cost, lower-risk way for cable to reduce its dependency on wholesale arrangements and complement its Wi-Fi offerings. MulteFire is another option on the menu, but is more of a wildcard.
  • Mid-Band Spectrum. If cable companies were ever to bid on spectrum, the 3.7-4.2 GHz band that the FCC will likely auction is the best ‘fit’ for cable. It would also pair well with any planned CBRS initiatives.
  • Wi-Fi and small cells. These worlds are converging (see my recent Wi-Fi roadmap piece). Wi-Fi 6 (802.11ax) improves Wi-Fi speed and range and reduces interference. It also helps address the ‘Wi-Fi Purgatory’ issue, which in my view has been a major damper on the cable Wi-Fi experience. One could also see cable companies complementing their Wi-Fi networks with strategic deployments of small cells, as Charter has indicated it might do.
  • DISH. As always, DISH remains a wildcard. If it ends up deploying some form of wholesale 4G/5G network, that could be a game changer as far as MVNO relationships are concerned. But as with most things in the cable/telco/mobile/internet landscape, it’s complicated, since cable remains DISH’s principal competitor on the pay TV side, which could certainly affect its appetite to host cable companies as wholesale customers.

The current state of affairs in mobile and broadband is a sort of equilibrium that will not last beyond 2019, as cable dips its toes further into cellular and cellular starts to dip its toes into broadband. Over time, we all realize, fixed and mobile networks will converge, and the customer, circa 2022-2025, might well not have separate fixed and mobile subscriptions (they’ll need that extra money for their 20-odd streaming TV services).

But for wireless to be anything more than a rounding error in the cable companies’ business, they’ll have to make a more substantial physical investment than they’ve been willing to undertake to date.

The Coming Era of Technology Foreign Policy

I find it interesting that two of the biggest themes happening in the tech world in 2019, which will carry well beyond 2019, have to do with government involvement and politics. The themes I’m referencing are trade and regulation. I’ve talked about regulation and will continue to cover that through the year, but I want to go deeper on the trade issue and how I see a broader technology schism emerging.

In my article on the China-US 5G space race, I alluded to the potential of two very different Internet ecosystems in the West and East. Having spoken with more technology executives and investors since that article, I think what is happening is more a broad technology schism that goes beyond the Internet.

The US-China trade war is accelerating China’s efforts toward complete verticalization and thus control of its technology future. This, however, at least for the moment, is a strategy that largely works in China. There has been some interesting debate around the potential for a third operating system, one Huawei will need to make, that is a fork of Android and could succeed in many parts of the developing world, as a billion or more humans still get their first computer (a.k.a. a smartphone).

China First
China has always been a unique technology ecosystem, with the exception of Apple’s success there. Most US companies, especially software companies, don’t have much of a chance to compete in China, and that trend seems likely to continue. An underlying trend to watch is the degree to which China doubles down on all things China-made and uses that as a new base to grow its expertise and then expand globally. I found a chart in a financial report I read which highlights a few core buckets where China is focusing on homegrown technology.

As China’s economic stimulus extends deeper into all areas of technology, with a focus on verticalization, it seems likely it will become even more difficult for foreign companies to compete in China. This potential for China to essentially pull off complete vertical technological implementation would be the first time we see this business strategy applied to a nation and its proprietary tech at scale. China can use 5G, its network infrastructure, its proprietary software, and custom China built silicon, to truly wall off the rest of the world.

Beyond China
Carrying this scenario out logically, China will then want to take this technology to other parts of the world. This is where Huawei’s effort to take an Android fork global will be interesting to watch. There are markets where this product will be dead on arrival, with most of Europe and India key among them. But the African continent is a wide-open field, and one Huawei has had its eye on for both network infrastructure and smartphones.

Developing countries in Africa, and perhaps some of the surrounding countries in East Asia, are the only places where I can see an Android fork potentially working. What I think is interesting is the potential for an integrated solution of China-created 5G network technology, a Chinese Android fork, and Chinese smartphone hardware as a strategy for global expansion. This raises some interesting questions globally when we consider technology creation becoming more easily accessible and its implications for countries’ national security.

The Technological Game of RISK
The last point I want to make here, and consider this more food for thought, is what happens if more countries catch onto China’s strategy of full-stack technological verticalization. For example, India is one of the fastest-growing technology markets. Will India want a full-stack Chinese solution taking a share of its local market? Or will India want to start verticalizing on its own? There is a global game of RISK happening around technology, and forward-thinking politicians and policymakers need to understand how the decisions being made today will impact their future.

The bigger question countries have to start considering seriously is their technology policy as it relates to national security. At the moment, we have no real evidence, only insinuation, that Huawei is a national security risk. But whether it is or not does not discount the concern that it could, at some point, become more of a risk. Does India want Chinese solutions so profoundly integrated into its technological ecosystem that, at some point, it’s too late from a national security standpoint?

While I’m just brainstorming out loud, you have to wonder about the implications of countries investing more in their own vertical solutions and how that creates more walls, not fewer, from a technology standpoint. Foreign countries cannot just build military bases at will in other countries’ backyards, and since technology can be seen as a potential vector for cyberwar, you have to wonder whether more governments will form stronger foreign policy when it comes to IP and core technology entering their country.

Again, I’m just thinking out loud, but I think the tech industry needs to start thinking long and hard about all the potential scenarios, both good and bad, that may come from the US-China trade war.

PC Makers’ New Strategy for Getting Around Level 4 Tariffs

Early last week, we got a look at the proposed level 4 tariffs that Trump and the US government could place on thousands of products, including PCs, laptops, notebooks, and some smartphones manufactured in China.

Most PC and device makers had thought we would never get to level 4 tariffs, but the trade war with China is intensifying. Tim Cook and other tech execs have been lobbying President Trump for over a year, explaining how tariffs on PCs, CE products, and smartphones could impact their companies and industries.

However, Trump and his advisors seem willing to apply this new round of tariffs regardless of their impact on major tech companies and consumers. They have not heeded the tech CEOs who have told them that the tariffs would raise prices on their products and, in turn, impact consumer buying and hurt their bottom lines. Unless there is a last-minute change in the trade talks, the level 4 tariffs could go into effect later this summer.

These tariffs leave all of the PC vendors scrambling to find a way around them if at all possible. Unfortunately, since most did not believe we would get to this place, moving manufacturing out of China quickly is a real challenge.

In talking to ODMs and OEMs, it appears a manufacturing strategy is emerging that could help in the short term and, over time, allow them to move a lot of their manufacturing out of China. A few OEMs were proactive and have already worked with their ODMs to move at least part of their production to countries like Vietnam, Indonesia, Malaysia, India, and even Mexico.

The problem is that most of the ODMs shuttered their factories in many of these countries when they moved a majority of their manufacturing to China. Now they, too, are scrambling to get their factories in these countries back online to handle some of the production of products targeted for the US market.

Indeed, this is the first step for most of the OEMs in dealing with these potential tariffs. Many PC companies are trying to move the actual manufacturing of US-bound laptops and notebooks to countries outside of China or, at the very least, do final assembly in those countries, which would allow them to ship from there and avoid the tariffs placed on products manufactured in China.

A move to manufacture products outside of China will be a slow and challenging process, but from talking with OEMs, they no longer believe they have a choice. Even if the level 4 tariffs can be avoided, they do not trust this or even future governments to resolve trade issues with China and now believe that, regardless of the outcome of the trade war, putting all of their manufacturing eggs in the China basket is no longer feasible.

The initial strategy of moving US-bound products to manufacturing plants outside of China is an essential start to this transition. But this will take time, and some products may still be subject to these tariffs throughout this year, as moving manufacturing out of China really can’t accelerate until the second quarter of 2020, according to my sources in Taiwan.

Moving even some manufacturing out of China could have dire consequences for China, though. A key to China’s current economic boom is that about 12 years ago, it began a major program to recruit young people from their agricultural roots to work in the growing number of factories springing up in dedicated commercial zones.

Millions of youth were recruited out of what would be called poverty-level farming and given a chance to work in factories. On the farm, these youth earned about $10 a week; in their new factory jobs, they started with salaries of about $100 a week. Making ten times what they made in the fields was transformational for most of these young people, and they helped China become the manufacturing powerhouse it is today.

However, if China loses even 10% of its PC, notebook, printer, and smartphone manufacturing to other countries around the world, my Chinese sources tell me it would mean a loss of at least 200,000 jobs and could even shutter some of the smaller factories in China. Such a move is bound to impact China’s future growth and its GDP.

The trade war’s ramifications for the tech industry are just starting to hit home and will force PC, smartphone, and many CE vendors to make some radical moves in manufacturing choices throughout 2019 and 2020. There will be real pain for them and for consumers as this move out of China proceeds, and it will impact thousands of jobs in China too.

PCs Are Changing Their Spots but Not Learning New Tricks

I know, I mashed up two sayings trying to convey what I think is the biggest issue with current PCs: we still do the same things we did ten years ago. We might do them faster, and everywhere we want rather than slowly and chained to our desks, but the way we do them has not really changed.

This is Computex week, so we have seen a long list of announcements coming out of the show in Taipei over the weekend, all focused on PCs. We heard about new silicon platforms from AMD, Intel, and Nvidia; original designs that sport dual screens, like the ZenBook Pro Duo by Asus; new materials, like the wood-paneled Envy from HP; and integrated 5G connectivity from Qualcomm and Lenovo’s Project Limitless. All very exciting developments for a market that has seen some new life injected into it. No matter which numbers you look at, PC sales have stabilized, and ASPs are growing.

Looking Beyond Hardware: Software First…

Nobody, though, is talking about software yet. Even two weeks ago, when Lenovo introduced the prototype of the foldable ThinkPad, there was no talk about software. Software is, at the end of the day, what will empower users to take advantage of all these new designs and capabilities. Giving me the option of a foldable screen or even two screens without changing the way the underlying software and apps work will do very little to make me feel my investment (which, for some of these devices, I am sure will not be insignificant) is worthwhile.

The burden of the platform is on Microsoft and, if we want to broaden the conversation to computing in general, on Google and Apple. It has been fascinating to me how differently the platforms have dealt with apps thus far, but I feel that neither Microsoft nor Google ever really addressed the app ecosystem problem head-on. Microsoft focused on delivering great first-party apps but did not seem to put the same effort into engaging with developers to deliver exceptional experiences beyond gaming. Google also concentrated on great first-party apps and worked on a cross-platform solution to make it easier for Chromebooks to leverage apps designed for Android, which more often than not leads to just-OK experiences and does not take apps to their full potential. Apple is in the process of giving developers the option to leverage the work they put into iOS on macOS, given that the Mac ecosystem never grew as far and wide.

Look back at your usage of both phones and PCs over the past ten years and see how much what we do with our phones has changed compared to what we do with our PCs. We might take our PCs with us everywhere and even have them connected, and we might have added a little touch and pen support, but what we do with them has not changed much at all.

I strongly believe that for new form factors such as foldables and dual screens, both the OS and apps need to be redesigned from the ground up with the intent of making our workflows richer or easier.

… Intelligence Second

From a platform perspective, I expect Microsoft, Google, and Apple to deliver more all-around intelligence too. While some brands have been talking about smart PCs, the focus so far has been on smart hardware rather than an intelligent experience at a workflow level. So, for instance, a PC might change its security settings based on the Wi-Fi network you connect it to, or the privacy display might turn on when the camera detects a person next to you.
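To make that concrete, here is a minimal sketch (my own illustration, not any vendor’s implementation) of the Wi-Fi-based behavior I just described. The SSID lookup uses Windows’ built-in netsh command; the network names, profiles, and the apply_profile() action are hypothetical placeholders for whatever the OS would actually enforce.

    # Illustrative sketch: pick a security profile based on the Wi-Fi
    # network the PC is connected to. SSIDs and profiles are hypothetical.
    import re
    import subprocess

    PROFILES = {
        "HomeNet": "relaxed",      # trusted home network
        "OfficeCorp": "standard",  # managed corporate network
    }
    DEFAULT_PROFILE = "locked-down"  # unknown networks: cafes, airports, hotels

    def current_ssid():
        """Return the SSID of the connected Wi-Fi network, or None."""
        out = subprocess.run(
            ["netsh", "wlan", "show", "interfaces"],
            capture_output=True, text=True,
        ).stdout
        match = re.search(r"^\s*SSID\s*:\s*(.+)$", out, re.MULTILINE)
        return match.group(1).strip() if match else None

    def apply_profile(profile):
        # A real implementation would adjust firewall rules, file sharing,
        # or VPN state; this placeholder only reports the decision.
        print(f"Applying security profile: {profile}")

    apply_profile(PROFILES.get(current_ssid(), DEFAULT_PROFILE))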

Intelligence in smartphones is increasingly focused on delivering a very personal and tailored experience across all the apps we use, and I would love to see the same applied to computing.

The ability to use AI to understand the way I use my PC, through Microsoft Graph or Google Assistant, and have my computer present apps in a way that facilitates my workflow and optimizes the use of the form factor and computing power could be a total game changer. Think about the routines you have set up with Alexa for your connected home, and then think about how powerful it would be to set up routines on your PC based on usage: simple things like pairing the apps you use for a specific task and presenting you with a screen that has them all ready as you start, or delivering the information you need before you look for it, like your calendar when you are trying to schedule a meeting or the translation of a passage when you are reading something with foreign text in it. Do you think this is impossible? If you consider what is delivered today by the smart editor in Word, design suggestions in PowerPoint, or smart replies in Gmail, you see we are well on our way. I just want a systemwide approach so that intelligence can really break free.
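To show how low the bar is for the simplest version of this, here is a toy sketch of a PC “routine” that pairs the apps for a task and opens them in one step. The task names and app commands are hypothetical, and a real platform-level version would learn these pairings from my actual usage rather than a hand-written table.

    # Illustrative sketch: an Alexa-style routine for the PC. Task names
    # and app commands are hypothetical; a platform version would learn
    # these pairings from observed usage instead of a hard-coded table.
    import subprocess

    ROUTINES = {
        "writing": ["notepad.exe", "winword.exe"],  # apps that belong together
        "scheduling": ["outlook.exe"],
    }

    def start_routine(task):
        """Launch every app associated with a task, side by side."""
        for app in ROUTINES.get(task, []):
            subprocess.Popen([app])  # fire and forget

    start_routine("writing")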

Demanding More

As much as our smartphones have become our first go-to for many of our computing needs, most of us still consistently turn to PCs for work. We might complain that they are not as sexy as our phones and lag in some of the functionality we love in our phones, like instant-on and battery life, but overall I think we have become quite resigned to accepting things the way they are.

Microsoft and Google, and to a lesser extent Apple, have added intelligence to their apps and services, but that intelligence is not yet permeating from first-party apps like Office and G Suite to cross-app experiences. Partly this can be explained by the fact that some of those services drive revenue, and therefore intelligence is used as a differentiator; but partly, I think, today’s limitations are driven by the fact that AI is considered a differentiator in the enterprise but not in the consumer space. So an enterprise that has access to the Microsoft Graph can deliver an intelligent workflow, assuming it cares about user experience, but as a consumer, I am just not given the same level of access. This makes very little sense to me, given how much more engagement platforms would drive by opening up their AI capabilities to developers the way they do with APIs. I bet consumers would even pay for that, as the return would be evident to them, and I do wonder if Google’s learnings on Android will result in a much more intelligent solution on Chromebooks.
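The plumbing for this kind of cross-app intelligence already exists, which is what makes the consumer gap so frustrating. As a rough sketch, and assuming an OAuth access token has already been obtained through the usual Azure AD flow, a single Microsoft Graph call returns the calendar context an assistant could build on:

    # Rough sketch: fetch upcoming calendar events from Microsoft Graph,
    # the kind of signal a cross-app assistant could use. Token acquisition
    # (the standard Azure AD OAuth flow) is not shown.
    import requests

    ACCESS_TOKEN = "..."  # placeholder for a real access token

    resp = requests.get(
        "https://graph.microsoft.com/v1.0/me/events",
        params={"$top": "5", "$select": "subject,start"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    for event in resp.json().get("value", []):
        print(event["subject"], "at", event["start"]["dateTime"])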

Unless Microsoft, Google, and Apple recognize that PCs need to catch up with smartphones in the overall experience they deliver, and not just the features they offer, consumers will continue to see smartphones as a superior computing platform even with the physical limitations of their current form factors. Failing to address these shortcomings in a timely fashion leaves the PC market open to more disruption from AR and VR.