HPE and Google Cloud Expand Hybrid Options

on June 18, 2019
Reading Time: 3 minutes

The range of choices that enterprises have when it comes to both locations and methods for running applications and other critical workloads continues to expand at a dizzying rate. From public cloud service providers like Amazon’s AWS and Microsoft’s Azure to on-premise private cloud data centers, and from traditional legacy applications to containerized, orchestrated microservices, the range of computing options available to today’s businesses is vast.

As interesting as some of the new solutions may be, however, the selection of one versus another has often been a binary choice that necessitated complicated and expensive migrations from one location or application type to another. In fact, there are many organizations that have investigated making these kinds of transitions, but then stopped, either before they began or shortly after having started, once they realized how challenging and/or costly these efforts were going to be.

Thankfully, a variety of tech vendors have recognized that businesses are looking for more flexibility when it comes to options for modernizing their IT environments. The latest effort comes via an extension of the partnership between HPE and Google Cloud, which was first detailed at Google’s Cloud Next event in April. Combining a variety of different HPE products and services with Google Cloud’s expertise in containerized applications and the multi-cloud transportability enabled by Google’s Anthos, the two companies just announced what they call a hybrid cloud for containers.

Basically, the new service allows companies to create modern, containerized workloads either in the cloud or leveraging cloud software technologies on premise, then run those apps locally on HPE servers and storage solutions but manage them and run analytics on them in the cloud via Google Cloud. In addition, thanks to Anthos’ ability to work across multiple cloud providers, those workloads could be run on AWS or Azure (in addition to Google Cloud), or even get moved back into a business’ own on-premise data center or into a co-location facility they rent as needs and requirements change. In the third quarter of this year, HPE will also be adding support for its Cloud Volumes service, which provides a consistent storage platform that can be connected to any of the public cloud services and avoids the challenges and costs of migrating that data across different service providers.

On top of all this, HPE is going to make this offering part of its GreenLake pay-as-you-go consumption model. With GreenLake, companies only pay for whatever services they use, similar to how cloud computing providers offer infrastructure as a service (IaaS). However, HPE extends what companies like Amazon do by providing a significantly wider range of partners and products that can be put together to create a finished solution. So, rather than having to simply use whatever tools someone like Amazon might provide, HPE’s GreenLake offerings can leverage existing software licenses or other legacy applications that a business may already have or use. Ultimately, it comes down to a question of choice, with HPE focused on giving companies as much flexibility as possible.

The GreenLake offerings, which HPE rebranded about 18 months ago, are apparently the fastest-growing product the company has—the partner channel portion of the business grew 275% over the last year, according to the company (though obviously from a tiny base). They’ve become so important, in fact, that HPE is expected to extend GreenLake into a significantly wider range of service offerings over the next several years. Indeed, in the slide describing the new HPE/Google Cloud offering, HPE used the phrase "everything as a service," implying a very aggressive move into a more customer experience-focused set of products.

What’s particularly interesting about this latest offering from the two companies is that it’s indicative of a larger trend in IT to move away from capital-intensive hardware purchases towards a longer-term, and theoretically stickier, business model based on operating expenses. More importantly, the idea also reflects the growing expectations that IT suppliers need to become true solution providers and offer up complete experiences that businesses can easily integrate into their organizations. It’s an idea that’s been talked about for a long time now (and one that isn’t going to happen overnight), but this latest announcement from HPE and Google clearly highlights that trends seem to be moving more quickly in that direction.

From a technology perspective, the news also provides yet more evidence that for the vast majority of businesses, the future of the cloud is a hybrid one that can leverage both on-premise (or co-located) computing resources and elements of the public cloud. Companies need the flexibility to have capabilities in both worlds, to have additional choices in who manages those resources and how they’re paid for, and to have the ability to easily move back and forth between them as needs evolve. Hybrid cloud options are really the only ones that can meet this range of needs.

Overcoming the complexity of modern IT still remains a significant challenge for many organizations, but options that can increase flexibility and choice are clearly going to be important tools moving forward.

Podcast: E3, AMD, Tech Industry Regulation, Google Pixel 4 Preview

on June 15, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing the announcements from the E3 gaming show, including big GPU and CPU announcements from AMD, discussing recent comments on potential antitrust movements against major tech companies from the US government, and trying to discern Google’s strategy around leaking Pixel 4 news early.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Government’s Regulatory and Antitrust Hypocrisy

on June 14, 2019
Reading Time: 5 minutes

I’m not sure if I fall into the minority on this viewpoint, but the more I talk to folks around the tech industry about regulatory concerns, the more I’m convinced that government regulation, or a breakup of big tech, is not the answer. In my mind, there are two pieces of low-hanging fruit to discuss regarding a modern antitrust environment.

Should the Definition and Circumstances Change?
I think it is clear from recent news and communications from the DOJ and the FTC that they are attempting to modernize what is understood as antitrust or anticompetitive behavior, and that market share is no longer a defining element. For those interested, I highly recommend reading this article in full, which is the speech of Assistant Attorney General Makan Delrahim at the Antitrust New Frontiers Conference.

Listening to questions fielded to CEOs or executives lately on whether they feel they are a monopoly, you’ll notice they have often used their market share as a defense. Small market share is simply no longer a defense in this new era; instead, the conversation will shift to two areas: competition and consumer harm.

This is essentially how Makan Delrahim states the purpose of antitrust, and the core question, in this line of his speech: "Therefore, the right question is whether a defined market is competitive. That is the province of the antitrust laws." Essentially, all discussion going forward should center on whether a market is competitive and whether any incumbents are abusing their leverage to stifle competition or, in some cases, innovation.

On the topic of both re-orienting our understanding of antitrust in the digital era and competitive market dynamics, I found the following points from Makan’s speech quite interesting:

Finally, the Antitrust Division does not take a myopic view of competition. Many recent calls for antitrust reform, or more radical change, are premised on the incorrect notion that antitrust policy is only concerned with keeping prices low. It is well-settled, however, that competition has price and non-price dimensions.

Price effects alone do not provide a complete picture of market dynamics, especially in digital markets in which the profit-maximizing price is zero. As the journalist, Franklin Foer recently said, “Who can complain about the price that Google is charging you? Or who can complain about Amazon’s prices; they are simply lower than the competition’s.” Harm to innovation is also an important dimension of competition that can have far-reaching effects. Consider, for example, a product that never reaches the market or is withdrawn from the market due to an unlawful acquisition. The antitrust laws should protect the competition that would be lost in that scenario as well.

If you follow executive commentary, you’ll note that price has also been mentioned as a monopoly defense. Apple was quick to point out that 80% of apps on the App Store are free, and Amazon points out that it is generally the most competitive on prices and works tirelessly to keep them low. It is abundantly clear that price is also no longer a defense against monopoly. This speech makes it clear the issue at hand is not Apple’s pricing per se but that there are only two app stores. While competition is alive and well, arguably, on the App Store, competition among app stores themselves is not.

The digital era is one of conglomerates. There is no way around that truth, and in this era, it seems antitrust initiatives will focus more on how these tech conglomerates use their leverage to stifle competition. This opens the door, in my opinion, to a more crucial and better competitive analysis, particularly of the companies themselves, which I have felt for some time have worked closely with their legal teams to get as close to the edge of antitrust behavior as possible without crossing it. Many companies may need to rethink some of their long-term strategies in light of the much larger magnifying glass being placed on them going forward.

Correlation, Causation, and Hypocrisy
A few other observations on this matter. In this speech, one of the more interesting viewpoints used was one looking historically at antitrust pursuits and remedies as a matter of causation. For example, Makan says about Microsoft “Although Microsoft was not broken up into smaller companies, the government’s successful monopolization case against Microsoft may very well have paved the way for companies like Google, Yahoo, and Apple to enter with their own desktop and mobile products.”

Note the language "may very well have." Most of us in the industry who have studied it for a long time can say with a high degree of certainty that the antitrust suit against Microsoft was absolutely not the reason Google or Apple saw success with their mobile operating systems. The danger in the government’s view here is reading too much into past successful antitrust actions and treating them as a template for similar successes today. That discounts the vast number of other dynamics that led to those companies’ successes.

As we build out our thinking around how companies have been operating, and more specifically how they use their leverage, I do think collusion and exclusivity are worthwhile areas for antitrust regulators to take a deeper look at with some companies. That being said, I did find it ironic that when it came to collusion, the following example was used:

The Antitrust Division may look askance at coordinated conduct that creates or enhances market power. Consider, for example, the Antitrust Division’s investigation of Yahoo! and Google’s advertising agreement in 2008. The companies entered into an agreement that would have enabled Yahoo! to replace a significant portion of its own internet search advertisements with advertisements sold by Google. The Antitrust Division’s investigation determined that the agreement, if implemented, would have harmed the markets for internet search advertising and internet search syndication where the companies accounted for over 90 percent of each market, respectively. The agreement was abandoned after the Antitrust Division informed the companies that it intended to file a lawsuit to block the implementation of the agreement.

Here again, something that seemed well intentioned may have hurt Yahoo more than it helped, since Yahoo has since faded into irrelevance, leaving us in the West with really only one search engine. Working with Google may have prolonged Yahoo’s life long enough for it to come up with something new or innovate. We will never know. The point remains that regulation runs the risk of having the exact opposite of its intended effect, and sadly, most of these regulators are not informed enough to play out all the scenarios as part of their decision-making process.

Furthermore, the level of hypocrisy antitrust regulators have shown up to this point seems odd. Think about cable monopolies, which had zero innovation, high prices, and very little competition in specific regions. Banking is another area where, while it seems like customers have a choice, there has been very little innovation, terrible customer experience, high fees, and a range of government-enabled barriers that make entry too difficult for many startups.

As I said at the beginning, sometimes regulation is helpful, but more often than not, especially in the digital era, I think it can be argued it has done more harm than good.

That being said, I’m glad the government will start taking a look at certain issues. However, I worry they are ill-equipped to do so in many areas, and my fear is that they will overstep in ways that end up hurting competition, consumers, and innovation unintentionally.

Netflix Needs to Pivot. Again

on June 14, 2019
Reading Time: 4 minutes

Of all the success stories in tech over the past 20 years, the evolution of Netflix is among the most fascinating. Few companies in history have pivoted successfully multiple times. And Netflix is now faced with a situation where they will have to evolve, yet again.

First, a quick history. Netflix, in its initial incarnation dating back to 1997, was that ‘little red envelope’ company. During the time of peak video store/Blockbuster, Netflix created a successful subscription business of movie/TV series rentals by mail. In that pre-broadband/pre-streaming period, Netflix was that era’s Grubhub and DoorDash. And for those who lived a distance from the nearest video store, Netflix was a godsend.

Then, broadband arrived, and with it, services such as iTunes for movie purchases and rentals online. Over a period of about ten years, video stores essentially disappeared, with Blockbuster being among the biggest casualties. But Netflix successfully pivoted here, maintaining its legacy mail order business while simultaneously building a successful streaming video business during the 2000s. Their ability to do this on a global basis was somewhat dependent on the availability of decent-quality broadband to homes.

Then, yet again, Netflix saw the writing on the wall and engineered a second successful pivot. Competing streaming services had emerged, and on-demand capabilities and libraries from cable companies and other providers continued to grow. This meant that much of the content available on Netflix was also available through other sources. So Netflix made a bet on original content, beginning with House of Cards. Hard to imagine that was only in 2013. During the past six years, Netflix has plowed billions into original content, producing, literally, thousands of scripted shows and movies, globally. Today, Netflix is effectively in two businesses: people still subscribe to Netflix for its vast library and terrific UI; but increasingly, Netflix is another content channel, just like HBO, Showtime, or Hulu. In an era with plenty of competition in streaming, Netflix has continued to grow quickly and enjoy incredible subscriber retention.

Which brings us to this moment, and the need for Pivot #3. Another exogenous market development has the potential to upend Netflix’s business. The slew of media M&A that has occurred over the past couple of years, and the imminent launch of new streaming services from Disney, Warner Media (AT&T), Apple, and others, will have a dual impact on Netflix: the first is a much larger number of streaming options competing for the consumer’s dollar; and the second is a dramatically altered content landscape. With Disney et al. getting into the business, Netflix is losing, and is getting prepared to lose, large chunks of its content library, as those companies choose to keep their content on their own platforms. For example, Netflix will lose most of its Disney content (which includes many of Disney’s properties, from Marvel to Lucasfilm), and is also in danger of losing TV staples that are among its most popular titles, such as Friends and The Office (here’s a good list). Even with all its Originals, two-thirds of the viewing hours on Netflix are of licensed content. Another dynamic is that with Apple and other well-heeled players getting into the game, competition for top talent is becoming increasingly intense (this is a great time to be in the upper echelons of the content business!).

The burgeoning of streaming options is competing for viewers’ finite dollars and the explosion of content choices is competing for viewers’ finite time. This is forcing Netflix to dust off its strategic plan yet again. This time, however, it’s less of a dramatic pivot and more of an evolution that has two components. The first part is that Netflix has been steadily raising prices, in order to pay for both more original content and escalating rights fees for licensed content (sound familiar, haters of cable companies?). In the same way cable bills went up largely because of mushrooming fees for sports rights, consumers will pay for this downstream. For Netflix, revenue growth is slower than content expenditure growth (sound familiar, haters of cellular companies?).  But Netflix is clearly taking the long view.

The second component is that Netflix is both broadening and deepening its original content productions. Some detractors describe Netflix as having become the ‘Wal-Mart’ of content. But it’s more like a department store (or at least what department stores used to be), offering content for multiple ages, life stages, and preferences, from lowbrow to highbrow. Another key component to this is investing in more regional content. Although much of its original content library is available globally, Netflix is also creating a lot of content for audiences in particular geographies, that not everyone might see. The fact that Netflix is a global company will be a key strategic advantage, going forward.

Now usually, life’s not so good for companies that raise prices as they lose content. It is interesting that so far, Netflix is relatively unscathed. Its stock price is up 50% this year, and subscriber growth has been solid. Wall Street does not seem to be overly worried. A year ago, there were all sorts of articles popping up about Apple, Amazon, Warner Media, and Disney being ‘Netflix Killers’. But the tone has changed. It now seems that each is looking more for its specific place in the universe: Disney as the inexpensive add-on that many people will buy, since, in the words of my eighteen-year-old, it "owns most stuff"; Apple, with a more Amazon Prime-like model for video; AT&T, adding some gravy to its signature property HBO; and Comcast Universal, which has announced it will launch a streaming service but whose motivations are more to have some leverage vis-à-vis competitors and some offering for the cord-nevers.

I think Netflix should weather the storm just fine, at least in the short-to-medium term. Mainly because it has made the right moves to become the ‘must have’ channel in a typical consumer or household’s content lineup. And even as the content universe churns, Netflix will still have the largest stockpile of its own and others’ content. And it has a superior user interface, is on everybody’s platform (including its competitors’), and enjoys a sterling reputation among consumers. But it’s interesting to see how Netflix in 2020 is very different from Netflix in 2010, which was very different from the Netflix of 2000. If Netflix is already the stuff of B-School business cases, another chapter is waiting to be written.

Recode’s CodeCon News Shows Tech Still in Denial

on June 12, 2019
Reading Time: 5 minutes

Recode’s Code Conference, taking place this week, is an annual appointment of the who’s who of tech with Kara Swisher, Peter Kafka, and crew. This year’s lineup includes talks with YouTube CEO Susan Wojcicki, Facebook executives Adam Mosseri and Andrew Bosworth, Amazon Web Services CEO Andy Jassy, Fair Fight founder Stacey Abrams, Netflix vice president of original content Cindy Holland, Russian Doll star Natasha Lyonne, and Medium CEO Ev Williams, to name a few.

There is still one day to go, but so far, one trend seems to come through: much of tech is still in denial about the issues tech and society are facing.

The Relationship Between Tech Companies and Government Sure Is Complicated

The past few months have seen the relationship between government and tech giants become much more complicated, from calls to break up Amazon, Google, and Facebook, to antitrust probes and calls for regulation of AI, facial recognition, and more.

At Recode’s Code Conference, speakers touched on many of these topics but gave little reassurance that they grasp the urgency with which some of these issues need to be addressed.

Instagram’s Adam Mosseri said that while splitting up Facebook and Instagram might make his life easier, it is a terrible idea because splitting up the companies would make it exponentially more difficult to keep people safe, especially on Instagram. He went on to say that more people are working on integrity and safety issues at Facebook than work at Instagram in total. This is not the first time the argument that size matters has been made. It seems disingenuous, though, not to point out that size also cuts the other way. It is, in fact, the size and the reach Facebook has that make it such an appealing target for bad actors, from election manipulation to hate speech. One could argue that a smaller company, while more limited in resources, would also hold less appeal.

AWS CEO Andy Jassy said he’d like to see federal regulation on how facial recognition technology should and should not be used. His eagerness, however, was driven by a concern that otherwise we would see 50 different laws in 50 different states. He also stated: "I strongly believe that just because the technology could be misused, doesn’t mean we should ban it and condemn it." Amazon, as well as Salesforce and Microsoft, have all faced criticism from employees over their involvement in providing technologies to ICE and US Customs and Border Protection. At CodeCon, immigrant advocacy organization RAICES accused tech companies of supporting the Trump administration’s zero-tolerance stance on immigration by making their technologies available to those agencies. While tech providers have been working with government agencies for years, the growing tension between privacy and civil liberties on the one hand and government interests on the other is raising the level of scrutiny, especially under the current administration.

Facebook to Reveal New Portal Devices

Talk about being in denial. Andrew "Boz" Bosworth, Facebook’s vice president of AR/VR, told The Verge’s Casey Newton that the company has "lot more that we’re going to unveil later in this fall" related to Portal, including "new form factors that we’re going to be shipping." While no sales numbers were provided during the interview, Bosworth said that Portal’s sales were "very good."

It is still unclear to me how Portal can be a long-term success for Facebook. The smart camera and smart sound that follow the subject were probably Portal’s most significant appeal. Having Alexa built in added to the draw for users who might have liked the technology but were not that keen on letting Facebook into their home.

I do wonder how long it will take both Amazon and Google to add similar technology to their screen-based devices and the impact that this will have on Facebook’s hardware. Both Amazon and Google scored better than Facebook did in our privacy and trust study, signaling that consumers will have a higher propensity to let those brands into their home before they let Facebook in.

I am also not convinced that Facebook’s focus on human connection transfers from Messenger to Portal. The kind of personal exchange that Portal is focusing on does not involve the same type of people we tend to engage with on Facebook Messenger. According to Similarweb.com, in 2018 Facebook Messenger had the second-largest audience after WhatsApp. While heavily skewed to North America, Facebook Messenger counted 1.3 billion users worldwide. Messenger has proved to be a very effective marketing channel, which means that many of those interactions are between consumers and a brand, or consumers and a bot.

While I understand how Portal offers the opportunity to create a more meaningful connection with users, I feel that Facebook is underestimating how much more cautious and irrational people become when we talk about privacy in the home. Ultimately, I do not see Facebook being able to deliver a differentiated smart home, assistant, or video chat service that will drive consumers to invest in their ecosystem over that of Amazon or Google.

We Are Trying Our Best, We Are Very Sorry

At the end of day one, Kara Swisher was asked if there was a common thread running through the interviews, and she answered that everybody was saying: "We are trying our best, we are very sorry."

It is hard to take the act of contrition on display as genuine when so much is at stake with tech at the moment. YouTube CEO Susan Wojcicki was asked by Ina Fried, the chief technology correspondent at Axios: "I’m curious, are you really sorry for anything to the LGBTQ+ community, or are you just sorry that they were offended?" The almost three-minute-long string of words that started with "I am really personally very sorry" was a non-answer to a straightforward question. Wojcicki pointed to overall improvements in handling hate speech that will benefit the LGBTQ+ community but really did not explain any of the thought process behind the decision.

Unfortunately, Wojcicki is not alone when it comes to the leadership of tech giants lacking accountability and transparency. Some commentators say these issues are not black and white, which is true, but that should not stop us from trying to resolve them.

It seems to me that most tech companies are not even willing to admit there is an issue, which will make it impossible to find a solution. Wojcicki, for instance, was not even ready to acknowledge that social media platforms contribute to radicalization. In a refreshing twist of events, Twitter seemed more in touch with reality as top legal counsel Vijaya Gadde said: “I think that there is content on Twitter and every [social media] platform that contributes to radicalization, no doubt.”

I am an optimist, and I like to see the good in tech. I certainly don’t want to fix something that is not broken, and I worry about government intervention because of regulators’ lack of understanding of the world we live in and their tendency to put their political agenda before us. That said, with the changes that technologies such as AI, ML, and 5G are bringing, it is time for big tech to step up their accountability, transparency, and ethics game. Whether you believe Voltaire or Spider-Man said it, it has never been more on point than today: "With great power comes great responsibility."

AMD’s Gamble Now Paying Off

on June 11, 2019
Reading Time: 3 minutes

For a company that just a few years ago some people had essentially written off as dead, AMD has certainly come a long way. Not only are they one of the top ten best-performing stocks of 2019 so far (after enjoying similarly high rankings for all of 2018), the company recently announced major new partnerships with the who’s who of big tech companies: Microsoft for the next-generation Xbox game console (Project Scarlett), Google for their Stadia cloud-based game streaming service, Sony for the PlayStation 5 game console, Apple for the new Mac Pro, and Samsung for GPU IP intended to power future generations of Galaxy smartphones and tablets.

On top of that, the company just launched its latest generation of desktop CPUs, the Ryzen 3000 series, at Computex two weeks ago, and yesterday at E3 debuted its newest Radeon GPU cards, codenamed Navi, which are based on a new GPU architecture the company calls RDNA (short for Radeon DNA). The first commercially available products from the Navi effort are the 5700 line of desktop GPU cards, designed specifically for the gaming market. Also, in a nod to the importance of CPUs in gaming, the company announced a new top-end addition to its 3rd-generation Ryzen line: the 16-core, 32-thread Ryzen 9 3950X.

All told, it’s a broad and impressive range of offerings, and it’s tied together by a few critical decisions the company’s leaders made several years back. Specifically, AMD decided to aggressively go after leading-edge 7nm process technology for both CPUs and GPUs and, importantly, chose to pursue a chiplet strategy. With chiplets, different components of a finished chip, made with different process technologies, can be tied together over a high-speed connection (AMD dubbed theirs Infinity Fabric) instead of trying to put everything together on one monolithic die. Together, these technology bets enabled the company to reach a point where it’s starting to do something that many thought unthinkable: challenge Intel on CPU performance and challenge Nvidia on GPU performance. While final numbers and testing still need to be done before official winners are declared, it’s clear that AMD can now be considered in the elite tier of performance in the most important semiconductor markets, particularly for CPUs. In the GPU space, AMD chose not to compare its new 5700 XT to Nvidia’s highest-performance GeForce RTX 2080 and RTX 2080 Ti cards, but given the aggressive $449 pricing of the new card, that certainly makes sense. (AMD is quick to point out that Apple claimed the Radeon Pro Vega II Duo-powered multi-GPU cards in the new Mac Pro are the fastest desktop GPUs in the world, but they’re really more of a workstation product.)

The momentum that AMD is currently enjoying is clearly due, in part, to those big technology bets, particularly around 7nm, as well as the fact that they are one of a few major semiconductor players with significant CPU and GPU assets. Again, many industry observers questioned that strategy for a long time, but now that the company is starting to leverage technologies from one side to the other and is really integrating its approach across what, admittedly, used to be two very distinct groups, the payoffs are starting to happen. In addition, the coordinated efforts are allowing them to do things like be the first company to integrate PCIe 4.0 across both CPUs and GPUs, as they’ve done with the latest Ryzen and Radeon products, as well as leveraging Infinity Fabric for both CPU-to-CPU connections (in the Ryzen line) and GPU-to-GPU connections (in the Pro Vega II inside the Mac Pro).

The company’s vision is now broader, however, as it’s started to reach into the server and datacenter market with its Epyc CPUs and Instinct GPUs, even launching what it claims will be the world’s fastest supercomputer in conjunction with the Oak Ridge National Laboratory. The overall Epyc and Instinct market share numbers are still small, and the cloud and datacenter markets are still generally very loyal to Intel and Nvidia, but the fact that AMD is back to being able to compete at all in the server market once again highlights the relevance of its core technology decisions. In addition, though it’s early, AMD’s newly announced partnership with Samsung could finally help the company make an impact on the mobile market—where they have been completely absent. With growing interest in cloud-based game streaming, we could even end up seeing AMD technology in the cloud talking to AMD technology in mobile devices, which is quite a stretch from where they’ve been.

In the end, it’s great for both consumers and businesses to see a truly rejuvenated AMD, because it inevitably forces all of its competitors to get better, which in turn, leads to better products from all the major players, as well as a more dynamic computing market. To be clear, AMD still needs to execute on the broad vision that it has laid out for itself—and unfortunately, execution issues have slowed the company in the past. Still, it’s encouraging to see some key strategies driving new opportunities, and it will be interesting to see what AMD is able to do with them as we head into the future.

Sick of Social Media Breaches? Ready to Pay for Privacy Help? Here’s What to Look for in Social Monitoring Products

on June 10, 2019
Reading Time: 3 minutes

Prior to the ironically privacy-focused F8, Facebook admitted that it "unintentionally uploaded" 1.5 million people’s email contacts without their consent. And earlier this spring, yet another Facebook data breach occurred: more than half a billion Facebook records were left exposed on Amazon’s cloud servers, open and available for public perusing – and for theft.

This probably doesn’t shock you – in fact, recent data indicates that it doesn’t. ID Experts® conducted a survey on consumer sentiments toward social media and privacy and found an interesting paradox. Although more than three-quarters of adults believe that their security is at risk on social media, that doesn’t prevent 63% from logging on to Facebook every day, 42% from browsing YouTube and 29% from checking Instagram.

At first glance, this certainly seems like strange behavior. We wouldn’t continue paying rent in a building that experienced constant break-ins and theft; why do we continue using services that repeatedly fail to prevent data breaches and data theft? And why does news that a service has experienced yet another breach leave us completely unfazed?

After considering the data at length, an obvious conclusion emerged – but one with radical implications: Social media users simply don’t know how to protect themselves. At an apartment complex, you’d know how to protect your space and would install stronger locks and a home security system to keep thieves from entering. But, for the most part, social media users aren’t acquainted with all the processes that allow thieves to access, share and abuse their data.

So – what can you do? Although the federal government certainly has a key role to play in protecting user privacy, and social media platforms must step up their game, consumers can’t continue to let their data be exploited while they wait for leaders to hammer out a legislative solution. They need a tool that will allow them to know when their data has been leaked and to protect them from the negative consequences of that leak.

We spent several months thinking about this, searching for ways to empower consumers to manage any threats to their online identity – everything from your profile and the content you upload to the content that is sent to you by others. After extensive conversations with consumers and a good deal of time in development, here’s what we think you should look for when hunting for a program to protect yourself:

Look for software that identifies impersonators. Celebrities aren’t the only ones whose social media profiles are duplicated; Facebook had to delete over half a billion fake accounts in the first three months of 2018. Find a product that scans through social media networks, hunting for any accounts that use the same name, nickname or profile picture that your account does and prompting you to report them to the social media network.

Look for software that stops doxxing. Doxxing – the leak of personal information online – can compromise not only your online identity but, in more extreme cases, your physical safety. The ideal product will notify you if you yourself, or someone else, shares personal information online and then allow you to remove the information so it doesn’t get into the hands of marketers, spammers, or someone with more sinister intentions.

Look for software that cuts objectionable content. Most of us have had the experience of receiving information that we’d rather not know – inappropriate images, disturbing language or unwanted solicitations. To prevent this, seek out software that watches incoming and outgoing posts, looking for illicit activity and drug- and violence-related content, and screening your accounts so you don’t have to.

Look for software that fights phishing and malware. Although malware has historically been considered the main threat to our online safety, recent data from Microsoft reveals that phishing attacks have caught up. But some of these attacks are so cleverly disguised that it’s difficult to avoid clicking on them. If you’re considering some sort of social monitoring software, ask whether it will notify you about phishing and malware attempts.
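
None of these products publish their detection logic, but to make the checks above more concrete, here is a minimal, purely illustrative Python sketch of the kinds of techniques involved: fuzzy name matching for impersonators, regular expressions for doxxed personal information, and a naive lookalike-domain test for phishing links. The function names, patterns, and thresholds are hypothetical; real products rely on far richer signals such as profile-photo hashing, platform APIs, and reputation data.

```python
import difflib
import re
from urllib.parse import urlparse

def looks_like_impersonator(my_name: str, other_name: str, threshold: float = 0.85) -> bool:
    """Flag an account whose display name is suspiciously close to yours."""
    ratio = difflib.SequenceMatcher(None, my_name.lower(), other_name.lower()).ratio()
    return ratio >= threshold

# Very rough patterns for the kinds of personal details a doxxing monitor watches for.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_personal_info(post_text: str) -> list[str]:
    """Return the categories of personal information spotted in a post."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(post_text)]

def suspicious_link(url: str, brands: tuple = ("paypal", "apple", "facebook")) -> bool:
    """Naive phishing check: a brand name appears in the hostname,
    but the hostname is not actually that brand's domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(b in host and not host.endswith(b + ".com") for b in brands)

if __name__ == "__main__":
    print(looks_like_impersonator("Jane Q. Public", "Jane Q Public"))         # True
    print(find_personal_info("DM me: jane@example.com or 503-555-0100"))      # ['email', 'us_phone']
    print(suspicious_link("http://paypal.com.account-verify.example/login"))  # True
```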

It’s easy to be discouraged and even indifferent in a world where data breaches have become normal and identity theft a common problem. But innovative software designers are working to change this paradigm, empowering users to enjoy the benefits of these platforms, free from the fear of exploitation. Before the next breach hits the headlines, take the time to do some research on social safety products. You deserve to have just as much peace of mind online as you do in your home or office. The key thing is to look for the products and people that can provide it.

Podcast: Apple WWDC19

on June 8, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell analyzing the announcements from Apple’s Worldwide Developer Conference keynote, including the impact of iPadOS, the details of the new Mac Pro, and the significance of the Sign In with Apple feature.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Apple Shows Pro Content Creators Some Love with New Mac Pro

on June 7, 2019
Reading Time: 4 minutes

I attended Apple’s WWDC keynote this week, and to say it was overstuffed with important announcements would be an understatement. From key updates to all of its operating systems (including the launch of iPadOS), to new developer tools such as SwiftUI and ARKit 3, to new privacy-focused rollouts including Sign In with Apple, the vibe was one of a company firing on all cylinders with a real sense of confidence and even a bit of swagger. Nowhere was this more evident than in the long-awaited and symbolically important announcement of the new Mac Pro.

A Long Overdue Release
Apple’s Mac Pro has long been a favorite of professional content creators, especially those working in video editing and computer-generated imagery (CGI). However, the last major Mac Pro launch happened back in 2013, when the company rolled out the current cylindrical version of the product with a starting price of $4,000. It was a bold design unveiled at WWDC that year, and Apple filled the product with unique technology designed to prove the company was still at the head of the class when it came to innovation. Unfortunately, Apple made some technology bets inside that design that failed to come to fruition, and that put it in a difficult position when it came time to refresh the product. And so, instead of major refreshes that would keep it relevant, the product saw minor speed bumps and fell further behind the competition. It languished for years, leaving many with the impression that Apple had abandoned some of its most ardent users.

Things got so bad that Apple took the unusual step of sitting down with a small group of technology journalists back in April 2017 to announce that a "completely rethought" version of the Mac Pro was in the works. Apple said it would ship…sometime in the future. More than two years later, Apple has finally announced the new Mac Pro, along with a new high-end monitor called the Pro Display XDR, both of which will ship this fall.

A True Mac Powerhouse
Apple executives left out any cheeky comments about the company’s ability to innovate and let the new Mac Pro, which starts at $5,999, speak for itself. The design returns to a more familiar tower form factor, but a highly modular one focused on accessibility, upgradeability, and—importantly—airflow. Cooling is key here, as the system can support workstation-class Intel Xeon processors with up to 28 cores, up to 1.5TB of memory (via 12 DIMM slots), graphics options including the Radeon Pro Vega II Duo with 64GB of memory (and two GPU chips), and a 1,400W power supply.

Beyond its top-shelf industry parts, the new Mac Pro also includes a custom-designed accelerator card, Apple Afterburner, that ramps up performance for video editors. Afterburner has a programmable ASIC designed to speed the conversion of native file formats and capable of decoding up to 6.3 billion pixels per second. Apple says the card can decode up to three streams of 8K ProRes RAW video or 12 streams of 4K ProRes RAW video in real time.
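
Those per-stream figures line up with the headline pixel rate if you assume full DCI 8K (8192 x 4320) and DCI 4K (4096 x 2160) frames at 60 fps; Apple doesn’t spell out the frame rate behind the 6.3 billion figure, so treat this as back-of-the-envelope arithmetic rather than a spec:

```python
# Rough sanity check of Afterburner's quoted throughput.
# Frame sizes assume DCI 8K (8192 x 4320) and DCI 4K (4096 x 2160);
# 60 fps is an assumption, not an Apple-published figure.
PIXELS_8K = 8192 * 4320   # ~35.4 million pixels per frame
PIXELS_4K = 4096 * 2160   # ~8.8 million pixels per frame
FPS = 60

print(3 * PIXELS_8K * FPS / 1e9)    # ~6.37 billion pixels/second for three 8K streams
print(12 * PIXELS_4K * FPS / 1e9)   # ~6.37 billion pixels/second for twelve 4K streams
```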

The system has a steel frame and is encased in an aluminum housing that lifts completely away to give full access to the internals. The aluminum housing features unique air-flow channels on the front and back that give the unit a bit of a "super cheese grater" look. Apple carries that look over to the rear of its Pro Display XDR, and it’s not merely a design choice, as the monitor itself has serious cooling requirements.

That’s because inside the 32-inch, 6K display, Apple is using a large array of LEDs to drive an astounding 1,000 nits of full-screen brightness and 1,600 nits of peak brightness. The display supports a P3 wide color gamut and 10-bit color for over 1 billion colors. This is a true professional-caliber monitor that Apple says can compete with industry products that cost upwards of $40K. The base model of the display starts at $4,999, and one with a special matte option will cost $5,999, each without a stand.

Apple spent a fair amount of time talking up the monitor’s new adjustable stand, but when execs revealed that the stand would cost $999, there was an audible negative reaction from the WWDC crowd. This, to me, was among the only Apple missteps of the entire keynote, and it’s really one more of perception than anything else. Apple knows that many professional content creators already have a high-dollar stand, and so the company is wisely offering the display sans stand. I’m certain that if Apple had said the display started at $5,999, or that you could buy it without the stand for less, nobody would have batted an eye. That said, I do find it absurd that to use the display with an industry-standard VESA mount, Apple forces the purchase of a $199 VESA adapter.

Setting the Future Stage
A Mac Pro that starts at $5,999, with a display that starts at $4,999 (minus a stand), is clearly not a product for the average consumer. And that’s the point. With these new products, Apple is showing professional content creators some serious love. Back in 2017, when Apple announced plans for a new Mac Pro, many of us saw that as a good sign, but as time wore on, it became concerning that it was taking so long. How hard, we wondered, is it to build a tower workstation? Apple rewarded that long wait with a true purpose-built system that should deliver world-class performance. Plus, Apple has created a design here that should allow for the type and cadence of hardware refreshes required by this segment of the market.

The other important thing Apple accomplishes with the new Mac Pro is establishing a clear distinction between this product and the rest of its Mac lineup. Why is this important? Because most of us expect Apple to shift the rest of its Mac lineup over to its own A-Series, ARM-based processors at some point in the future. When that happens, and Apple talks up all the benefits of the switch, it is conceivable that many would point back to this 2019 launch and suggest that, once again, Apple had launched a Mac Pro that was out of step. However, this design—and especially the inclusion of the Apple Afterburner accelerator technology—firmly establishes that no matter what comes next from Apple, the pro-centered end of its lineup will continue to offer a high-powered x86 option. For pro content creators, this is a very good thing.

Amazon’s Inspiration

on June 6, 2019
Reading Time: 4 minutes

For the past few days, I’ve been attending a conference called re:MARS. One can say it is organized by Amazon, but what makes this conference different from the many I attend every year is that it is not all about Amazon. Jeff Bezos and his team have been organizing a conference called MARS for years now. It’s helpful to know what MARS stands for: Machine Learning, Automation, Robotics, and Space. These are essentially the core categories you can expect to learn something about by attending. MARS has been a source of inspiration for Amazon, and a way to inspire others by having guests, generally acclaimed as the top scientists, academics, or entrepreneurs in their fields, share ideas and some of the groundbreaking work they are doing. MARS has always been open only to a very select group of people, but this year, Amazon did a very Amazony thing and scaled the conference.

While it was still invite-only, the group expanded to several hundred, with the goal of remaining intimate and communal and of bringing the best and brightest minds together to share ideas and challenges and inspire one another. As I said, what made this event so different is that it was organized by Amazon but not about Amazon. The themes were still focused on Machine Learning, Automation, Robotics, and Space, and the best analogy I can make is that re:MARS is somewhat like TED, but focused just on the MARS themes.

Even though this conference is not about Amazon in the way that other company events I attend are all about their products and announcements, Amazon did fit in its own learnings about machine learning, automation, and robotics and gave key executives some air time to share what they have learned and what excites them about the problems they are setting out to solve. And yes, a few announcements snuck in, like Alexa getting a much more conversational interface and the official reveal of the Prime Air drone, with Prime Air delivery coming to customers in a few months. The rest of the talks taught us how machine learning is being used in biochemistry to help solve health problems in bioscience. We learned how far robotics has come and what major breakthroughs have led Boston Dynamics to build robots that can walk, jump, run, and scale objects very much like a human. We also learned about the behavioral science of humans interacting with robots, and how the ways humans treat robots tell us a great deal about our humanity at its core.

I’ve appreciated the thought-provoking sessions and wanted to share a few highlight takeaways.

  • Humans and Robots living and working together. Until you experience how many robots Amazon has built and has running in its warehouses, you can’t appreciate what Amazon executives call a symphony of humans and robots working together. On the show floor and at the conference, Amazon displayed the many robots it has designed and built to automate its warehouse work as much as possible. We heard a story about a factory in Japan where human workers show up to work and do some stretching and warm-up exercises, and the robots they work with are programmed to do the same exercises. This has a single goal in mind: to help the workers feel more at peace with, and connected to, their robot working companions. Since much of this particular collaboration between humans and robots happens in warehouses and not in public, we don’t see this dynamic, but we are rapidly approaching the idea of humans and robots in an active community. So the question came up: how should we humans think about these robots? Are they peers and colleagues, essentially on the same level as humans, or something else? MIT researcher Kate Darling offered a profound observation and a way we could think about robots. Through the years, she explained, humans have lived communally with animals in working relationships as well as companion relationships. So perhaps it is best if we perceive robots in a similar way to how we perceive animals. Fascinating, and worth a good think.
  • AI May Really Help Solve Some of the World’s Greatest Problems. Yes, we hear this line, and it feels cliché at this point, but many of the world’s top minds in the field of AI truly believe it. We heard examples of how machines trained with computer vision to detect tumors were doing a better job of predicting anomalies and specific treatments than expert physicians, or how these machines could predict with greater accuracy the severity of an injury. We learned how the cost of developing a new drug for an illness has gone from $100 million to over $2.5 billion, due to the many failures in the trial-and-error process required to end up with a winning compound. AI seems well positioned to run simulations of these compounds and help bioscientists narrow the field of potential candidates before testing their effects. I’ve believed AI will be one of the most, if not the most, transformational technologies many of us will ever witness in our lifetime, and I believe it even more now.
  • One Observation about Amazon. Yes, this was not a conference about Amazon, but there is an interesting Amazon observation to be made. In the keynotes Amazon employees gave us, we heard about Amazon’s robotics strategy and what the company learned solving challenges in automation with robotics. We learned how Amazon’s AI models are helping to make shopping on Amazon, or using Amazon services, more relevant and personal and to provide a better customer experience. We learned how Amazon created the Just Walk Out retail technology showcased at Amazon Go stores, and more. And what hit me was that, while Amazon wasn’t there to pitch AWS to the world, Amazon as a company is the first and best customer of AWS. With this viewpoint, Amazon has built AWS on the back of the learnings of a company as good at scaling technology as anyone, across many industry disciplines. These learnings, and the solutions they led Amazon to invent, put the company in a position to offer answers to hard problems as part of AWS to other customers. AWS was built out of Amazon needing to solve its own problems and then became a platform to help other people solve theirs. The tools included in AWS now embody deep expertise in machine learning, computer vision, automation, and more. I’ve long felt competing with Amazon in key areas like retail and commerce will be very difficult, and I believe that even more now.

At re:MARS, Amazon hopes that invitees are inspired by the work the speakers share in the sessions, and that this inspiration keeps them charging forward and inventing the future.

Apple WWDC: Two Non-Announcements that Made News

on June 5, 2019
Reading Time: 5 minutes

Since Apple’s WWDC keynote on Monday, it has been fascinating to see how people reacted to two things in particular: the death of iTunes and mouse support for the iPad. In a way, these were non-announcements, as neither was announced on stage, and I put them together because in my mind both are deeply rooted in legacy workflows, and people’s feelings about both are quite polarized. On iTunes, there is a camp feeling nostalgic and sad to see it go, and one that wishes its death had been called much earlier. On mouse support for the iPad, there is a camp rejoicing because it brings the iPad closer to delivering a PC experience, and one that sees the addition as a step back from enabling new touch-first workflows. So which camp am I in?

iTunes Can Now Rest In Peace

The iTunes brand sure meant a lot to my generation. iTunes, and of course the iPod, were the door into digital music, so I can understand why so much of the WWDC press coverage focused on this. It is the end of an era, and the steps Apple took to transition the iTunes functionality reflect precisely where we are with content consumption.

iTunes has felt tired for a long time, and it also felt like it was trying to be too many things at once. Apple itself made fun of this second point on stage, announcing that it was adding calendar and email support before unveiling what it was really doing: splitting iTunes’ functionality. For content, we now have three distinct apps: Apple Music, Apple TV, and Podcasts. These are the same apps we use on iOS, so it just makes sense to have consistency with the Mac, especially given the efforts around Project Catalyst, aka Marzipan. The change also reflects that music is no longer the only digital content people consume regularly. The personal content users already have does not suffer from this change either; it will be automatically transitioned into the apps that match the content type, so your music library will be in Apple Music and your movies in Apple TV.

iTunes was also how consumers synced their content and performed backups, but even this has changed for many consumers. If you have embraced iCloud, you have had little need for iTunes already. But if you have not transitioned to the cloud, you can still perform these tasks via Finder, which, if you think about it, is a much more logical place for them.

Digital content has moved on, and so have we. I am thankful for the service iTunes provided, but I am glad to let it go.

To Mouse or not to Mouse, this is the Question!

This non-announcement is a little more complicated. At no point on stage did Apple refer to mouse support as a feature for the iPad. Instead, Apple talked about an enhanced touch experience that would improve editing on the iPad, something many users had been asking for. As someone who uses an iPad Pro as my primary computing device when I travel, I can attest to how painful editing text can be. Apple also announced desktop-browsing support for Safari, which basically means that, while in iOS sites defaulted to a mobile version and users could force the desktop version, iPadOS will be set up the opposite way. This is a step that will improve workflows quite significantly for users.

So how do we get to mouse support? A developer noticed an accessibility feature in iOS 13, part of AssistiveTouch, that provides a pointer replicating a finger touch and can be navigated using Bluetooth and USB mice. The feature was already available in previous releases but has been optimized. The iPad also got the newly introduced USB support for external storage, so I do wonder if the USB mouse support is part of the same work.

Of course, you can be a skeptic and say that Apple buried the feature so as not to admit that its stance on the iPad and mouse has changed. When I look at some of the videos people have posted on how this new mouse feature works and, more interestingly, when I read comments from regular users, I can’t help but think that at least this version of mouse support is really what it says on the label: an accessibility feature. I say that because it is clearly not designed to replicate the traditional use of a mouse. Users will try to use it in that capacity, but I would guess the experience will be subpar compared to what it could be if Apple really decided to give the iPad a mouse. It is not the first time that Apple has changed its mind and marketed the U-turn in such a way that you think it was always planned. I just don’t believe that is the case on this occasion.

From the short demo I had of the new gestures, it seemed to me that a lot of my pain points were addressed, but I was curious to hear what people who downloaded the developer beta thought of them in comparison to this accessibility feature. One comment was particularly interesting to me:

Owen talks about the gestures being useful if you are typing on glass, while the accessibility feature makes a difference when using a keyboard with your iPad. I find this interesting because it seems to align with Apple’s position on touch on the Mac. Apple has always said it does not believe in vertical touch: if your hands are on the keyboard, it is unnatural to reach out and touch the screen. I used to share that conviction when my primary devices were Macs and larger PCs, but I have come to use touch and keyboard a lot with my iPad because that is what I learned to do on my Surface. The reason this is more natural to me than with a larger PC is simply that both the Surface Pro and the iPad tend to be much closer to me than a regular PC, making lifting my hand to touch the screen much more comfortable. As a matter of fact, I do not use the touchpad on the Surface keyboard as much as I use it on a traditional notebook.

We are all a little different, and our workflows are all unique, even when we use the same apps. For me, as I have said before, what it really should boil down to is whether the device fits your workflow. I admitted to my editing and browser pain with the iPad Pro, pain that I endure because the return I get from being able to do everything I want with one device is enough of a driver for me. Am I happy that Apple is addressing my pain points with improved touch? Absolutely! Do I want a mouse? No, I don’t, because if it were such a crucial part of my workflow, and the same goes for the keyboard, I would carry a Mac.

If you still think you cannot do real work on an iPad because of the lack of mouse support, I don’t think the iPad fits your workflows, and that’s ok. If you want to try and use the iPad and feel that you are compromising on user experience in such a way that the pain is more than the reward, then the iPad is also not for you, and that’s ok too. With Project Catalyst, it could be that some users for whom keyboard and mouse are essential might find that iOS-like apps on a Mac bring them closer to an iPad experience without compromising on their core needs. This is the beauty of being able to choose a tool that best fits your workflow, not the other way around.


Apple Blurs Lines Across Devices

on June 4, 2019
Reading Time: 4 minutes

Is the iPad really a computer? Or is it a computing accessory?

That’s a question that’s triggered enormous amounts of debate and discussion for many years now, and regardless of where you stand on the issue, it’s one that has never really been definitively answered. At yesterday’s WWDC, however, Apple certainly took some big steps towards an affirmative answer with the launch of iPadOS and all the latest enhancements it entails.

Most importantly, the introduction of a true file system and the Files app to access it puts the iPad in a similar class to other “full” computers. Though Steve Jobs may be rolling in his grave because of it (he was notoriously averse to anything like a visible file system for the iPad), it’s something that the iPad has desperately needed for those who want to run computer-esque productivity apps and do enterprise-like work on the device. It turns out storing, finding, and organizing files on local storage—and having easy access to external storage (like USB sticks!)—is an absolutely essential part of those types of efforts. Without it, the iPad was severely handicapped; with it, it’s time to give a fresh look to the concept of tablet computing.

In addition, though Apple didn’t talk about it, the forthcoming version of iPadOS (expected this fall along with iOS13, tvOS13, WatchOS 6 and MacOS Catalina) also includes support for Bluetooth mice. The feature is currently hidden in some accessibility settings, but it’s hard to imagine Apple keeping it there for too long, as the lack of mouse and true cursor support has been a lingering concern around iPad computing for some time as well. Now that the secret’s out, serious iPad users will be clamoring for it.

But Apple’s blurring of device lines wasn’t limited to iPads becoming more computer-like. There were also several introductions that highlighted how the iPad can become a more useful computer accessory. Most notable of these was the debut of the new Sidecar feature in MacOS Catalina that will let you use an iPad as a secondary monitor for your Mac. While there are certainly cheaper options for dedicated monitors, the ability to let you use your iPad as a secondary display on an occasional (or even regular) basis is something that many Mac users will undoubtedly find very useful. In an age of increased multitasking, there’s never enough screen real estate, so options to extend your desktop and apps across multiple screens make a great deal of sense.

Interestingly, because Sidecar also supports Apple Pencil on the connected iPad, it’s almost like bringing some level of touch-screen support to the Mac. To be clear, it only works with Mac apps that currently support stylus input (think graphics apps), but it can add a Touch Bar, even to Macs that currently don’t have them, and will likely lead to other touch-enabled features.

Another critical iPad-to-Mac benefit is the company’s new Project Catalyst, which allows developers to easily move some of the 1 million apps designed for iPads over to the Mac. Apple said it used the technology to move some of its own apps, such as Apple News, Stocks, Home, etc., over to MacOS, and with Project Catalyst, they’re opening up the same capability to developers who use the company’s Xcode development tools. Given the relative dearth of new Mac applications, this is a critical step for the ongoing life of the Mac platform.

What’s interesting about all these developments is that Apple is taking a new approach to its various product categories that seems less concerned with potential overlap and more concerned with providing the best advances possible for each. In other words, in the past, Apple appeared to be very conscious of the potential confusion that could be created in understanding what an iPad could do (and how it could be done) versus what a Mac could do. Hence, there was much more separation between the iPad and the Mac in terms of capability and functionality.

As device usage trends, product category sales trends, the impact of the cloud, and several other realities of the modern digital world have evolved, however, Apple seems to be much less worried about defining the categories (and limits) of each of its devices. Instead, these new announcements suggest that they want to leverage whatever resources they can to make the iPad experience as good as it can be and the Mac experience as good as it can be. This new approach towards the realities of our multi-device world may create some confusion among some people about what device to buy, or which one to use for certain applications or in certain situations. In the long run, however, it seems to be a much healthier perspective that allows people to get the most out of whatever individual devices, or combination of devices, they happen to have access to.

From an overall perspective, these developments are particularly important for the Mac, which has certainly seemed to be Apple’s abandoned stepchild for quite some time now. In conjunction with the impressive-looking new Intel and AMD-powered Mac Pro also introduced at WWDC, however, it’s clear that Apple is providing some much-needed love to its first platform device.

If Apple really wants to get serious about letting people use their products and services across the reality of today’s complex multi-device world, they’re going to have to do a lot more work in getting some of their devices (like Apple Watch), applications, and services to work across other non-Apple platforms (as I’m sure they’ll eventually do). In the meantime, however, these new announcements show that Apple is becoming the kind of company that’s perfectly comfortable with embracing the uncertainty and blurriness of today’s digital product categories.

Major Changes Coming to the Cable-Wireless Relationship

on May 30, 2019
Reading Time: 4 minutes

Cable’s third, but still nascent, foray into the wireless business will undergo major change over the next couple of years, driven by M&A, the rollout of 5G, and new competition in broadband from fixed wireless. Three ‘events’ are cause to revisit cable’s prospects in the mobile sector: the announcement, on May 20, that New T-Mobile would continue the MVNO relationship that Sprint has with Altice (if the Sprint deal goes through), plus extend ‘fair terms’ for 5G; rumors about Altice launching its MVNO service at significantly discounted prices; and the impending rollout of 5G and, over time, fixed wireless services by Verizon and, potentially, New T-Mobile.

Question #1 is whether the cable MVNO effort has been successful to date. In a nutshell, ‘sorta’. Xfinity Mobile, which launched about two years ago, claims 1.4 million subscribers, which is about 5% of Comcast’s base of broadband customers. Spectrum Mobile (Charter’s offering) launched nearly a year ago and has 340,000 subscribers, which is about 1.5% of its broadband base. Those are real numbers, but they aren’t moving the revenue needle at their parent companies, nor has cable taken measurable share from wireless. Those signing up for cable MVNO services are trending toward the price-conscious, in that the majority of subs are choosing the ‘pay per GB’ plan.

So, the three largest cable companies are in wireless, but they’re not all in. None has invested in deploying its own infrastructure, nor have they acquired spectrum in recent auctions. I don’t think they’re in wireless because they see huge potential profits in that business (Verizon’s wholesale terms make that very tough). They’re in it because they need to keep a toe in, given larger industry dynamics, and because of some modest retention benefits to their broadband base.

But there are three major developments on the horizon that will force a change in cable’s wireless strategy. The first change depends on what happens with the T-Mobile-Sprint deal. If it goes through, as I still believe it will, decent terms will have to be extended to Altice, including 5G. This will be part of any concessions offered. So, Altice will be able to offer a price-competitive wireless service, bolstered by its growing network of Wi-Fi hotspots. Altice’s infrastructure will prove to be an even more critical asset to New T-Mobile, as they race to build out 5G services leveraging both the 600 MHz spectrum and Sprint’s 2.5 GHz network. New T-Mobile will be a major player in 5G, and has promised it would offer residential broadband service to 50% of homes at prices below today’s typical broadband.

Second, the 5G rollout that will occur steadily over the next couple of years will alter the equation for cable. It is not clear whether the Verizon MVNO contract with Comcast and Charter includes 5G. If, as we believe, it does not, their mobile offerings would start to be at a disadvantage, especially once a compelling array of 5G phones (such as a 2020 iPhone) becomes available and as 5G coverage hits some critical mass.

The third potential game-changer is fixed wireless. As Verizon rolls out fixed wireless to more cities beginning later this year, it will start competing more directly with cable companies in the broadband business. This dynamic does not augur well for the MVNO relationship, especially considering that a major motivation for cable’s wireless initiative is to boost retention of its broadband customers. It gets sticky for New T-Mobile over time, as well. Yes, they must extend fair MVNO terms to Altice for the foreseeable future, yet their planned home broadband might be competing directly with Altice in a handful of markets.

The implications are that cable will be forced to revisit its wireless strategy in the not-too-distant future. If they want to get to the next level of growth, the cable companies will have to reduce their dependency on a purely wholesale relationship for their mobile offering. Fortunately, some viable options are presenting themselves, and at just the right time. Rather than a choice of one option or another, it’s more of a ‘cocktail’, consisting of the following:

  • CBRS. This is the 3.5 GHz shared spectrum service that will be launching later this year. This could be a lower-cost, lower-risk way for cable to reduce its dependency on wholesale arrangements and complement its Wi-Fi offerings. MulteFire is another option on the menu, but is more of a wildcard.
  • Mid-Band Spectrum. If cable companies were to ever bid on spectrum, the 3.7-4.2 GHz band that the FCC will likely auction is the best ‘fit’ for cable. It would also fit well with any planned CBRS initiatives.
  • Wi-Fi and small cells. These worlds are converging (see my recent Wi-Fi roadmap piece). Wi-Fi 6 (802.11ax) improves Wi-Fi speed and range and reduces interference. It also helps address the ‘Wi-Fi Purgatory’ issue, which in my view has been a major damper on the Cable Wi-Fi experience. One could also see cable companies complementing their Wi-Fi networks with strategic deployments of small cells, as Charter has indicated it might do.
  • DISH. As always, DISH remains a wildcard. If it ends up deploying some form of wholesale 4G/5G network, that could be a game changer as far as MVNO relationships are concerned. But as with most things in the cable/telco/mobile/internet landscape, it’s complicated, since cable remains DISH’s principal competitor on the pay TV side, which could certainly affect its appetite to host cable companies as wholesale customers.

The current state of affairs in mobile and broadband is in a sort of equilibrium that will not last beyond 2019, as cable dips its toes further into cellular and cellular starts to dip its toes into broadband (cable). Over time, we all realize that fixed and mobile networks will converge, and that the customer, circa 2022-2025, might well not have separate fixed and mobile subscriptions (they’ll need to spend that extra money on their 20-odd streaming TV services).

But for wireless to be anything more than a rounding error in the cable companies’ business, they’ll have to make a more substantial physical investment than they’ve been willing to undertake to date.

PCs Are Changing Their Spots but not Learning New Tricks

on May 29, 2019
Reading Time: 4 minutes

I know, I mushed up two sayings trying to convey what I think is the biggest issue with current PCs: we still do the same things we used to do ten years ago. We might do them faster, and everywhere we want rather than slow and chained to our desk, but the way we do them has not really changed.

This is Computex week, so we have seen a long list of announcements coming out of the show in Taipei over the weekend, all focused on PCs. We heard about new silicon platforms from AMD, Intel, and Nvidia; original designs that sport dual screens, like the ZenBook Pro Duo by Asus; new materials, like the wood-paneled Envy from HP; as well as integrated 5G connectivity with Qualcomm and Lenovo’s Project Limitless. All very exciting developments for a market that has seen some new life injected into it. No matter which numbers you look at, PC sales have stabilized, and ASPs are growing.

Looking Beyond Hardware: Software First…

Nobody, though, is talking about software yet. Even two weeks ago, when Lenovo introduced the prototype of the foldable ThinkPad, there was no talk about software. Software is, at the end of the day, what will empower users to take advantage of all these new designs and capabilities. Giving me the option to have a foldable screen or even two screens, but not changing the way the underlying software and apps work, will do very little to make me feel my investment – which I am sure for some of these devices will not be insignificant – is worthwhile.

The burden of the platform is on Microsoft and, if we want to broaden the conversation to computing in general, on Google and Apple. It has been fascinating to me how the different platforms have dealt with apps thus far. Each took a different path, but I feel that neither Microsoft nor Google ever really addressed the app ecosystem problem head-on. Microsoft focused on delivering great first-party apps but did not seem to put the same effort into engaging with developers to deliver exceptional experiences beyond gaming. Google also concentrated on great first-party apps and worked on a cross-platform solution to make it easier for Chromebooks to leverage apps designed for Android, which more often than not leads to just-ok experiences but does not take apps to their full potential. Apple is in the process of giving developers the option to leverage the work they put into iOS for MacOS, given that ecosystem never grew as far and wide.

Look back at your usage of both phones and PCs over the past ten years and see how much what we do with our phones has changed compared to what we do with our PCs. We might be taking our PCs with us everywhere and even have them connected, we might have added a little touch and pen support, but what we do with them has not changed much at all.

I strongly believe that for new form factors such as foldables and dual screens, both the OS and apps need to be redesigned from the ground up with the intent of making our workflows richer or easier.

… Intelligence Second

From a platform perspective, I expect Microsoft, Google, and Apple to deliver more all-around intelligence too. While some brands have been talking about smart PCs, the focus so far has been on smart hardware rather than an intelligent experience at the workflow level. So, for instance, a PC might change its security settings based on the Wi-Fi network you connect it to, or the privacy display might turn on when the camera detects a person next to you.

Intelligence in smartphones is more and more focusing on delivering a very personal and tailored experience across all the apps that we use, and I would love to see the same applied to computing.

The ability to use AI to understand the way I use my PC, through the use of Microsoft Graph or Google Assistant, and have my computer present apps in such a way as to facilitate my workflow and optimize the use of the form factor and computing power could be a total game changer. Think about the routines you have set up with Alexa for your connected home, and then think about how powerful it would be to set up routines on your PC based on usage. It could be simple things like pairing the apps you use for a specific task and presenting you with a screen that has them all ready for you as you start that task, or delivering the information you need before you look for it, like your calendar when trying to schedule a meeting or the translation of a passage when reading something that contains foreign-language text. Do you think this is impossible? If you consider what Microsoft Office can deliver today with the smart editor in Word and design suggestions in PowerPoint, or smart replies in Gmail, you can see we are well on our way. I just want a systemwide approach so that intelligence can really break free.
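
To make the routine idea a little more concrete, here is a minimal, purely hypothetical sketch: the Routine class, the app names, and the start_task() helper are all invented for illustration, and a real implementation would hook into the operating system and an assistant’s usage data rather than printing to the console.

```python
# Hypothetical sketch of a usage-based PC "routine": pair the apps needed for a
# named task and surface the context the user will want before they ask for it.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Routine:
    name: str
    apps: List[str]                                   # apps to open together for this task
    context: List[str] = field(default_factory=list)  # info to surface up front


# Invented examples; a real system would learn these from observed usage.
ROUTINES = [
    Routine("Write weekly report", apps=["Word", "OneNote", "Teams"],
            context=["this week's calendar", "last report draft"]),
    Routine("Schedule a meeting", apps=["Outlook", "Calendar"],
            context=["attendees' availability"]),
]


def start_task(task_name: str) -> None:
    """Open every app a routine needs and surface its context (simulated here)."""
    routine: Optional[Routine] = next((r for r in ROUTINES if r.name == task_name), None)
    if routine is None:
        print(f"No routine defined for '{task_name}'")
        return
    for app in routine.apps:
        print(f"Opening {app}...")        # a real implementation would launch the app
    for item in routine.context:
        print(f"Surfacing: {item}")


start_task("Schedule a meeting")
```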

Demanding More

As much as our smartphones have become our first go-to for many of our computing needs, most of us are still pretty consistent in turning to PCs for work. We might complain that they are not as sexy as our phones and lag in some of the functionality we love so much in our phones, like instant-on and battery life, but overall I think we have become quite resigned to accepting things the way they are.

Microsoft and Google, and to a lesser extent Apple, have added intelligence to their apps and services, but that intelligence is not yet permeating from those first-party apps, like Office and G Suite, into cross-app experiences. Partly this can be explained by the fact that some of those services drive revenue and therefore intelligence is used as a differentiator, but partly I think today’s limitations are driven by the fact that AI is considered a differentiator in the enterprise but not in the consumer space. So an enterprise that has access to the Microsoft Graph can deliver an intelligent workflow, assuming it cares about user experience, but as a consumer, I am just not given the same level of access. This makes very little sense to me given how much more engagement platforms would drive by opening up their AI capabilities to developers in a similar way to what they do with APIs. I bet consumers would even pay for that, as the return would be evident to them, and I do wonder if Google’s learnings on Android will result in a much more intelligent solution on Chromebooks.

Unless Microsoft, Google, and Apple recognize that PCs need to catch up with smartphones in the overall experience they deliver, and not just in the features they offer, consumers will continue to see smartphones as a superior computing platform even with the physical limitations of their current form factors. Failing to address these shortcomings in a timely fashion leaves the PC market open to more disruption coming from AR and VR.

Podcast: Qualcomm FTC Ruling, Huawei Woes

on May 25, 2019
Reading Time: 1 minute

This week’s Tech.pinions podcast features Ben Bajarin and Carolina Milanesi analyzing the ruling in the court battle between Qualcomm and the US FTC, and discussing the latest challenges facing Huawei and their implications for the future.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Shipments and Market Share Matter, Even if Companies Say Otherwise

on May 24, 2019
Reading Time: 4 minutes

Apple recently stopped reporting quarterly hardware volumes in its earnings calls. Amazon has, famously, never reported its hardware numbers. Nor has Microsoft (for Surface). In fact, many companies don’t publicly state their hardware shipments, and more than a few suggest the unit number is less important than the revenue number. Obviously, revenue is hugely important, but the world still wants to know: How many did you ship?

Unit volumes are important because they help drive an industry-wide scorecard. We sum them up, and it tells us if the market is growing, flat, or declining. And it gives us important information about the status of the players inside that market and their relative position against the competition. Companies use the numbers to plan their businesses, their marketing, and even their employee bonuses.

Market research companies capture shipment volumes through different methods. At IDC, we use a very resource-intensive one that involves dozens of people across the world. It’s not a perfect system, and we occasionally make mistakes (when we do, we work to correct them). There’s been a fair amount of chatter about our numbers lately, and I thought it might be instructive to talk about our process.

Top-Down Process

IDC tracks new product shipments into the channel. Most of IDC’s tracker products publish quarterly, but the process of collection is a year-round job that we approach from the top down and the bottom up. Let’s start with the top down. Each quarter IDC reaches out to the companies we cover, and we ask for worldwide/regional/country guidance. Our worldwide team collects these numbers and distributes them to the dozens of regional and country analysts around the world. A remarkably large number of companies participate in this process, as they see the value in a third party collecting and disseminating these numbers. We look at these numbers as the starting point, not the finished product. As they say: Trust, but verify.

The process we use to verify is also the one we use to capture shipments for vendors that don’t guide us or report their numbers through earnings calls. This is a multi-pronged approach that includes our world-class supply-side team, our worldwide tracker team, and communication with IDC’s various analysts tracking component shipments.

IDC’s supply-side team resides in Taiwan, but they spend a great deal of time in China. They are in constant contact with component vendors and ODMs that are building the devices for the major vendors. Their relationships here have taken years to build and require frequent face-to-face meetings. The top-line numbers they collect, which include details such as which ODMs are building for which OEMs, deliver a critical fact-checking data point for our trackers and help us move closer to a market total that includes smaller players (Others) that we don’t track individually.

Meanwhile, the worldwide tracker team is acquiring numerous import/export records from countries around the world. These files are expensive, big, and messy, and our team spends weeks cleaning them to get at their valuable data, which can include details such as SKU-level data and even carrier destination for smartphones. This data is then passed along to the local analysts.

Finally, IDC’s various component-tracking analysts are collecting their information about processors, storage, memory, and more. These inputs—which obviously lag shipments of finished products—represent a third top-down data point that we use to triangulate on an industry total.

Bottom-Up Process

While the top-down processes are in motion, our regional- and country-level analysts are conducting a bottom-up approach. One of the key steps is to reach out to the regional contacts of the vendors to ask for guidance. These calls help both IDC and the vendors track down possible internal errors in shipment distribution.

In parallel, dozens of local analysts are also accessing localized distribution data. Access to this data varies widely by country. In some places it’s a deep well of important information, in other places it’s very basic, and in some places it’s simply not available.

Concurrently, the local analysts are having ongoing discussions with the channel. Like distribution data, the level of inputs here can vary widely. In some places, channel relationships drive a great deal of very detailed information. In other places, the channel plays it close to the vest, and the analyst is forced to do more basic checks. In the end, the channel piece is an important part of the overall process.

Bringing It All Together

The various top-down and bottom-up processes culminate with a mad dash to input data, cross-check that data across inputs, fix mistakes, make new ones, fix those, and then QA the finished product. All to publish, typically, about eight weeks after the end of the quarter. Two weeks later, the same teams update their market forecasts. Another monumental effort, driven by a whole different set of processes.
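
For readers who like to see the cross-checking step spelled out, below is a minimal sketch of the triangulation idea, with every number and source name invented for illustration. It is not IDC’s actual model; it simply shows how independent inputs for one vendor’s quarter can be reconciled into a working estimate, with outliers flagged for a closer look.

```python
# Hypothetical reconciliation of independent shipment estimates (millions of units)
# for one vendor in one quarter. All figures and source labels are invented.
from statistics import median
from typing import Dict, List, Tuple

estimates: Dict[str, float] = {
    "vendor_guidance": 10.4,   # top-down guidance from the vendor
    "supply_side": 9.8,        # ODM and component checks
    "import_export": 10.1,     # customs records rolled up by country
    "channel_checks": 11.6,    # bottom-up distribution and channel data
}


def reconcile(inputs: Dict[str, float], tolerance: float = 0.10) -> Tuple[float, List[str]]:
    """Use the median as the working estimate; flag sources that diverge beyond tolerance."""
    working = median(inputs.values())
    flags = [source for source, value in inputs.items()
             if abs(value - working) / working > tolerance]
    return working, flags


working_estimate, needs_review = reconcile(estimates)
print(f"Working estimate: {working_estimate:.2f}M units")
print("Sources to re-check:", needs_review or "none")
```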

Is the process perfect? Far from it. Do we make mistakes? Yes, but we try to acknowledge them and correct them. Different firms use different methods, but we feel ours is a good one. Sometimes that means we diverge from the pack in terms of a company’s shipments in a given quarter. If you see us doing so, it’s because we feel our process—and the information we’ve collected—has led us to a different conclusion. I should note that this process is becoming increasingly important as the secondary market for products such as high-end smartphones heats up, and a few companies drive real revenue through the sales of refurbished phones. IDC attempts to track these units in our installed base, but we work to keep secondary phone shipments out of our shipment numbers.

If a company says revenues or margin matter more than shipments, that’s not an unreasonable position to take. Especially in a slowing or declining market. However, you can bet that behind the scenes that company is still closely looking at shipment volumes and market share. In the end, markets need shipment data to track the health of their industry and the relative position of the players inside of it.

Make Digital Transformation about Your Business, not about Millennials

on May 23, 2019
Reading Time: 4 minutes

Millennials might be where your digital transformation journey starts, and their imminent control of the workforce might even put some pressure on your timing. Ultimately though, digital transformation should come from a more profound desire to look at your business processes and make them better, more efficient, more user-friendly.

Every presentation I see about digital transformation talks about talent shortage, which drives a highly competitive employment market and a stronger need to retain talent when you find it.

But what about the current workforce? A recent Gallup study showed that 85% of employees are not engaged or are actively disengaged at work. If you are interested in knowing how much that costs in lost productivity, Gallup estimates a whopping $7 trillion! Eighteen percent of the employees are actively disengaged in their work and workplace, while 67% are “not engaged.” This means that the majority of the current workforce is indifferent to the organization they work in.

While the Gallup report goes on to talk about how performance reviews and better management can help change this, I would argue that digital transformation could alleviate, if not eradicate, such apathy at work.

Engagement Makes for Successful Consumer Brands….

Millennials are not the only employees who care about their job. They’re not the only employees who want to collaborate, feel rewarded for the work they do, and expect to have the right tool for the job. Gen Xers want many of the same things. And I would guess baby boomers did too.

The big difference between millennials and Gen Zers and previous generations is that technology is not foreign to them. And this is not just about devices; it is about applications as well. Over the years, people have been talking about consumerization of IT in many different ways, but it is fascinating to me that the core of what a consumer business focuses on has never been brought to the enterprise. And that core is the drive for engagement.

If you talk to any consumer brand, engagement is what they strive for. If they have an engaged audience, they have an audience that will very likely be loyal and generate revenue for them. The same can be true in an enterprise, where the final user of technology is indeed a consumer. So, why has that rarely been a focus in the enterprise? Maybe it is because someone’s job has never been seen as an engagement opportunity on tap that can be turned on and off. But how can you have high productivity with disengagement? And if you don’t have productivity, how can you run a successful business and have loyal employees? Gallup clearly shows how much money disengagement will cost you, but the impact goes deeper than that.

Multiple factors drive disengagement. But lack of the right tools, lack of data, lack of an understanding of what the business imperatives are, as well as work processes that get in the way rather than facilitate someone’s task, are probably the worst offenders.

…So Why is Consumerization Bad in Enterprise?

Consumerization of IT has always had a somewhat derogatory connotation. For many IT managers, consumerization meant providing devices and applications that were not designed for enterprises and therefore not as capable or as sophisticated and certainly not as secure as the tools that an enterprise would choose.

The reality, though, is quite different.

When you’re looking at critical devices that have been successful with consumers, such as smartphones, there’s not a lot of difference between a smartphone that I use for work and one that I use for personal use. Long gone are the days when we carried two phones, one for work and one for personal use. Security has been baked in at an acceptable level in many smartphone models because what consumers do today requires the kind of security that enterprises also demand.

When it comes to apps, consumerization takes on a different meaning. It’s not just about security; it is about putting the user first and designing something that is above all user-friendly, because that user-friendliness will drive engagement. Design, however, has never been a priority for IT departments, which is why, at the Citrix Synergy event in Atlanta this week, it was fascinating for me to listen to how the new intelligent digital Workspace is delivering two essential components for driving successful digital transformation.

First, Workspace builds on your existing infrastructure but adds support for micro-apps in a streamlined landing page, which will allow enterprises to look at current workflows and all the applications used to complete a task, and to intelligently streamline those processes into predictable steps in an efficient workflow. You can see how, when you add data-driven intelligence to this concept, enterprises could deliver to employees a set of workflows that are, in reality, best practices catered to their specific needs. Imagine the impact this approach could have on a new-employee onboarding program, for instance. This idea takes a page out of the consumer book, where more and more services and apps are using AI to deliver a personalized experience, something that employees will come to expect in their work environment too.
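
To illustrate the micro-app concept in the abstract (this is not Citrix’s actual API; every class and name below is invented), a landing page in this model is essentially a feed of small, single-action cards pulled out of much larger back-end systems:

```python
# Hypothetical sketch of a micro-app landing page: each card exposes one
# predictable step of a larger workflow instead of the full back-end application.
from dataclasses import dataclass
from typing import List


@dataclass
class MicroApp:
    title: str
    source_system: str   # the full application the step comes from
    action: str          # the single task the card exposes


def landing_page(cards: List[MicroApp]) -> None:
    """Render a streamlined list of pending actions pulled from many systems."""
    for card in cards:
        print(f"[{card.source_system}] {card.title} -> {card.action}")


landing_page([
    MicroApp("Expense report #4821", "Finance system", "Approve or reject"),
    MicroApp("New-hire equipment request", "IT service desk", "Pick a laptop"),
])
```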

The second key component Citrix is able to deliver is Citrix Analytics: specifically, being able to measure the performance of an app and the infrastructure around it, as well as score employees’ user experience. This is a critical move in shifting the way enterprises should think about return on investment for initiatives aimed at improving employees’ TOMO and business efficiencies. More often than not, the way enterprises want to measure the return on investment is by measuring productivity improvements based on old parameters that are a misfit for the new tools. Mobile is an excellent example of this struggle. When the smartphone era started, IT struggled to measure the return on investment that a smartphone deployment would have on employees. Soft targets, such as the employee satisfaction that comes from being able to complete a task while away on business or being able to start the day on the long commute to the office, were hard to measure. It took years before enterprises began to see the value of the higher engagement that mobile offered: higher customer satisfaction, higher employee satisfaction driven by flexible hours and remote working… and the list goes on. Shifting the burden of the return on investment onto the tool rather than the employee will help ensure that enterprises do not just pay lip service to transformation but really focus on improving workflows.

Employees are consumers of the technology as well as customers of the IT department. The sooner enterprises start seeing them in that light, the easier it will be to put them first, driving their engagement at work and making them the best evangelists for their brand.