Podcast: Microsoft Surface Go, Apple MacBook Pro, PC Market, Microsoft Teams

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Microsoft’s new Surface Go mini 2-in-1 device and Apple’s updated MacBook Pros, analyzing the recent PC market shipment numbers, and talking about the latest version of Microsoft’s chat application, Teams.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

OTT Television Will Get Worse Before It Gets Better

We all love the era of ‘peak TV’ but are, at the same time, overwhelmed by it. A similar thing is happening with the evolution of how we watch TV. This is a landscape that is experiencing a glut of offerings, with more coming – and with it the inevitable fragmentation and customer confusion. If I look at the roadmap over the next couple of years, I see this becoming worse before it gets better. The question is, how might it get better, and who will be the winners (hopefully consumers) and losers?

First, how did we get here? For the 50+ crowd, it started with the VCR. For the 40+ crowd, things changed with DVRs and the whole notion of time shifting. But that landscape was still largely owned by the cable companies, who turned this into a new way to charge you an extra 10-15 bucks a month. The next major shift was the penetration of good-enough broadband networks and Netflix’s big shift to online – and the bet on content and concomitant rise of other ‘over the top’ (OTT) options, from Hulu to Amazon to YouTube and a whole bunch of others. In recognition, and perhaps semi-capitulation, those who were being disrupted — Comcast, DISH, DirecTV — are also now disrupting themselves with a slew of the industry’s worst acronyms since ‘throttling’: ‘skinny bundles’, ‘vMVPDs’, and the like.

How are they doing it, you ask? In part, by stripping out some of the stuff you don’t want to pay for, but then occasionally want…such as live sports. Remember the Oscar-nominated song, “Blame Canada” from South Park? The Pay TV anthem should be “Blame the NFL! Blame ESPN!” for this whole confusing mess. If a neophyte/technophobe asks you the following question at a cocktail party: “I hear I can cut cable. What are the options?”  G’luck, you’ll be three drinks into it or they’ll have walked off.

And just when you think this might have sorted itself out….when the vMVPDs (say that one out loud or try typing it ten times) like DirecTV Now and YouTube TV offer a critical mass of channels and functionality, so you only have to make eensey-weensy levels of compromise instead of God-awful [First World Problem-ish] levels of compromise, along come a whole lot of industry perambulations that are going to create a hot mess all over again. Love Netflix? Say goodbye to Disney and Pixar next year, and say hello to Disney’s own branded streaming service. Thought you had sports figured out or chose the vMVPD that had the most sports content? Say hello to ESPN+ (has anyone said hello to ESPN+?). Then, look at the daily headlines — AT&T-Time Warner (please, please, please don’t mess with HBO)! The battle for Viacom/CBS! The bidding war for 21st Century Fox! What’s Apple Gonna Do? — and the 20%-ish of you who have cut the cord, alongside the rest of you who are understandably deer in the headlights on this one, will have rapidly concluded that whatever choice you made isn’t exactly future-proofed.

And then, on your next trip outside the U.S., try to figure out what you can and can’t watch on your tablet (Netflix, mostly yes! Most other stuff, mostly no!).

Actually, what you might conclude is that, after skinny bundling it, the new (surprise!) unbundled price of broadband, plus Netflix, plus Hulu, plus Amazon, plus HBO, plus Showtime, plus some sports, plus the 3-4 things your particular vMVPD inevitably doesn’t have, plus the three sticks for your “not smart TVs”…ends up being more expensive and a whole lot more trouble than if you’d just stuck with cable in the first place. Or, plunge yourself back into the 1970s with an antenna (really, they’re selling like hot cakes) and actually watch This is Us when it’s on and with your family. Or, you could head in the Leave No Trace direction.

Kidding aside, there are some great things happening. Yes, Peak TV with 500+ new scripted shows this year alone, and untold billions being unleashed to create more content. Lots more choice of programming options and bundles. And there are steady improvements in on-screen UIs, programming guides, and even voice integrations such as Alexa to help make all this stuff a bit easier to sort, search for, and figure out.

But the business end of this is going to go through a lot of tumult over the next 3-4 years. First, there are 3-4 major deals involving major media/content companies, with more inevitably coming. Second, there will be landmark battles on rights fees, as the media landscape gets rearranged in this wave of M&A. Prediction: the sports leagues are in for a takedown. Third, there’s going to be a shakeout in the whole vMVPD space, with 2-3 emerging as clear winners. How this will all play out, with the move by some properties such as Disney, ESPN, CBS, and so on, to their own direct-to-consumer offerings, is anyone’s guess. Parenthetically, I think the direct-to-consumer approach, for all but a few properties, will be a disaster.

The end result, in my view, might be greater emphasis on a la carte, one-off type offerings. Subscribe to FX this month to binge a couple of shows, then switch to Showtime next month. Buy a season’s worth of your favorite ball team through MLB, rather than choose the vMVPD that carries your local sports channel. All of this might be made easier by an evolution in the UI, in search, and better integration in the new generation of Smart TVs. This is a great project for the voice-driven assistants, like we’ve started to see with Alexa and Siri (though they still have a long way to go). It’s also a fun playground for AI, as long as it doesn’t get too creepy on us.

Consumers might save a few bucks along the way, but at the other end, they’ll be a lot more educated on the arcana of rights fees, the industry landscape, and what content is worth paying for. But it’ll be messy along the way. And it will get worse before it gets better.


Promise of Magic Leap AR now powered by NVIDIA

After four long, long years of development in which many in the outside world (myself included) doubted that the product would ever see the light of day, Magic Leap held an event this week to give some final details on its development kit. First, the Magic Leap One Creator Edition will be shipping “this summer” though nothing more specific was given. Pricing is still unknown, though hints at it being in the realm of a “high-end smartphone” point to this as a ~$1,500 item.

For the uninitiated, Magic Leap is the company behind countless videos that appeared to show the most amazing augmented reality demonstrations you can imagine. The demos, some of which were claimed to be captured rather than created, were so mind-blowing that it was easy to dismiss them as fantasy. Through a series of live streams, the Magic Leap team is attempting to demonstrate the capability of the hardware and leave the negative attention behind.

Magic Leap showcased a demo called Dodge, in which the wearer combines the use of their hand as a pointing and action device with the real world. As the wearer looked around the room, a grid was projected on the floor, table, and couch, indicating the system recognized the surfaces thanks to the depth camera integration. Using an ungloved hand to point and pinch (replicating a click action), the user sets locations for a rock monster that emerges from the surface in a fun animation. It then tosses stones your way, which you can block with your hand and push away, or move to the side and watch them float harmlessly past – one even hitting a wall behind you and breaking up.

The demo is a bit jittery and far from perfect, but it proves that the technology is real. And the magic of watching a stone thrown past your head and virtually breaking on a real, physical surface is…awesome.
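
To make the interaction concrete, here is a minimal, purely hypothetical sketch of the loop the Dodge demo implies: detect surfaces from depth data, then let a pinch gesture drop an anchor onto whichever surface the hand is pointing at. None of these types or function names come from Magic Leap’s SDK; they are illustrative stand-ins running on faked data.

```python
# Hypothetical sketch of a pinch-to-place AR interaction, not Magic Leap SDK code.
from dataclasses import dataclass

@dataclass
class Surface:
    label: str          # e.g. "floor", "table", "couch"
    height_m: float     # estimated height of the plane above the floor

@dataclass
class HandState:
    is_pinching: bool   # True while thumb and index finger touch
    target: Surface     # surface currently under the user's pointing ray

def detect_surfaces(depth_frame):
    """Stand-in for plane detection from the depth camera."""
    # A real system would fit planes to the point cloud; here we fake the result.
    return [Surface("floor", 0.0), Surface("table", 0.7), Surface("couch", 0.4)]

def update(hand: HandState, placed: list):
    """Place an anchor (e.g. the rock monster's spawn point) on a pinch."""
    if hand.is_pinching and hand.target is not None:
        placed.append(hand.target)
        print(f"Spawn anchor placed on the {hand.target.label}")

if __name__ == "__main__":
    surfaces = detect_surfaces(depth_frame=None)   # fake frame for illustration
    anchors = []
    # Simulate the user pointing at the table and pinching once.
    update(HandState(is_pinching=True, target=surfaces[1]), anchors)
```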

The other new information released covered the hardware powering the One. For the first time outside of the automotive industry, the Tegra X2 from NVIDIA makes its way into a device. The Magic Leap One requires a substantial amount of computing horsepower to both track the world around the user and generate imagery realistic enough to immerse them. The previous-generation Tegra X1 is what powers the Nintendo Switch and NVIDIA’s own SHIELD TV, and the X2 can offer as much as 50% better performance than that.

The TX2 is an SoC with four Arm Cortex-A57 CPU cores and two more powerful NVIDIA-designed Denver2 ARMv8 cores. A Pascal-based GPU complex with 256 CUDA cores is included as well, a small step below a budget discrete graphics card for PCs like the GeForce GT 1030. This is an impressive amount of performance for a device meant to be worn, and with the belt-mounted design that Magic Leap has integrated, we can avoid the discomfort of the heat and battery on our foreheads.

The division of processing is interesting as well. Magic Leap has dedicated half of the CPU cores to developer access (2x Arm A57 and 1x NVIDIA Denver2) while the other half are utilized for system functionality. This helps handle the overhead of monitoring the world-facing sensors and feeding the GPU with the data it needs to crunch to generate the AR imagery. There was no mention of how the resources of the 256 Pascal cores will be divided, but there is a lot to go around. It’s a good idea on Magic Leap’s part to ensure that developers are forced to leave enough hardware headroom for system functionality, drastically reducing the chances of frame drops, stutter, etc.
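
As a rough analogy for how a platform can wall off cores like this, here is a generic Linux CPU-affinity sketch in Python. This is not Magic Leap’s system software, and the specific core split is a made-up example; it only shows the general technique of restricting an application to a subset of cores so the rest stay free for system work.

```python
# Generic Linux CPU-affinity illustration (not Magic Leap platform code).
import os

APP_CORES = {0, 1, 2}       # cores an application is allowed to use (hypothetical split)
SYSTEM_CORES = {3, 4, 5}    # labeled here only for illustration: reserved for sensor
                            # fusion, compositing, and other system-level tasks

def confine_to_app_cores():
    # sched_setaffinity(0, ...) applies to the calling process (Linux only).
    os.sched_setaffinity(0, APP_CORES)
    print("App now restricted to cores:", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    confine_to_app_cores()
```

In a shipping platform this kind of partitioning is enforced by the OS and SDK rather than by the application itself, which is exactly why developers can’t accidentally starve the tracking and compositing pipeline.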

The chip selection for Magic Leap is equally surprising and not surprising. NVIDIA’s Tegra X2 is easily the most powerful mobile graphics system currently available, though I question the power consumption of the SoC and how that might affect battery life for a mobile device like this. Many had expected a Qualcomm Snapdragon part to be at the heart of the One, both because of the San Diego company’s emphasis on VR/AR and mobile compute and because Qualcomm had invested in the small tech firm. At least for now, the performance that NVIDIA can provide overrides all other advantages competing chips might have, and the green team can chalk up yet another win for its AI/graphics/compute story line.

There is still much to learn about the Magic Leap One, including where and how it will be sold to consumers. This first hardware is targeting developers just as the first waves of VR headsets from Oculus and HTC did; a necessary move to have any hope of creating a software library worthy of the expensive purchase. AT&T announced that it would be the “exclusive wireless distributor” for Magic Leap One devices in the US, but that is a specific niche that won’t hit much of the total AR user base. As with other VR technologies, this is something that will likely need to be demoed to be believed, so demo stations at Best Buy and other brick-and-mortar stores are going to be required.

For now, consider me moved from the “it’s never going to happen” camp to the “I’ll believe it when I try it” one instead. That’s not a HUGE upgrade for Magic Leap, but perhaps the fears of vaporware can finally be abated.

Surface Go: Judging a Product by the Market

Microsoft just launched the Surface Go, a 10” Surface powered by a 7th-generation Intel Pentium Gold processor, in a fanless design, offering 9 hours of battery life, priced at $399.

Size is always a difficult topic to discuss when it comes to tablets. The balance between how much screen real estate you need to be productive and how little you want to carry to stay mobile is a very tricky one, and a very personal one too. I always wanted a smaller Surface to better balance work and play. So far I have been treating Surface Pro more like I would my PC/Mac than my 10″ iPad Pro. Of course, device size is only one part of the equation when what you want is to balance work and play, as app availability plays a prominent role in reaching that balance.

What cannot be argued is the greater affordability of a sub-$500 Surface and the opportunity it opens up to create a broader addressable market across the enterprise, consumer, and education segments.

Lots changed since Surface RT

I know there is a temptation to think about Surface Go as Surface RT’s successor and dismiss it even before it launches, but I ask you not to do so just yet. I say this not because Surface Go is running on Intel and not ARM, but because a lot has changed in the market since 2012 and even since Surface 3.

When Surface RT hit the market, Microsoft was responding to the success the iPad was having in the consumer market as well as the hype around tablets taking over the PC market. Beyond early adopters, however, consumers were still figuring out if there was space in their life between their smartphone and their PC – many are still debating that today – and enterprises were trying to understand if employees could actually be productive on a device that was not a laptop and with an operating system, Windows 8, that was not optimized for touch. Moreover, Surface was a new brand for consumers and Microsoft an unproven supplier for the enterprise market.

I don’t need to remind you that Surface RT was a flop and Microsoft went back to the drawing board to bring to market a more affordable Surface to accompany Surface Pro. Fast-forward to 2015, and Surface 3 managed to find its place in some enterprises as IT departments bought more into the 2-in-1 trend and felt more comfortable with its Intel architecture.

Surface Go aims at providing an upgrade path for Surface 3 users. It also looks at broadening Surface’s reach into the enterprise through firstline workers and, more broadly, users who might not need all the horsepower of a Surface Pro but do not want to compromise on the hardware design. Adding Surface Go to the portfolio while assuring consistency of experience and fidelity of apps was probably the biggest driver behind sticking with an Intel architecture at a time when Windows-on-ARM is just getting off the blocks.

The ‘once bitten, twice shy’ Surface team prioritized capitalizing on a small but solid base with a known formula for now, and will probably wait for the Snapdragon 1000 to broaden the appeal to users who might prioritize mobility and a more modern workflow over legacy. As disappointing as this might be for BYOD users and consumers, this was the safest bet to get IT buy-in.

Beyond Enterprise

A lot has been written about Surface Go being a reinvigorated effort on Microsoft’s part to go after iPad, and how could it not be, given that iPad remains, eight years in, the most successful tablet on the market. While eroding iPad’s market share would be a welcome bonus, I think there is market share to be had within the Windows ecosystem first.

Considering price and design as the only two ingredients a product must have to tackle the iPad dominance in the tablet market ignores a crucial factor in the iPad’s success: apps. Apps make up a big part of the experience users buy into when using iPad and iPad Pro. This, in my opinion, is still Surface’s weakest link when it comes to broadening its reach into the consumer space.

When we look at the Windows 2-in-1 market, design still leaves a lot to be desired, especially when it comes to the overall package of screen quality, keyboard, and pen experience. This is especially true of the new Windows on ARM devices, which boast excellent battery life and LTE connectivity but do not seem designed with mobility first in mind. While many Windows manufacturers continue to dismiss the success of Surface based on market share, it is clear that brand awareness and satisfaction have grown significantly. Throwing Surface Go into the mix of options for consumers and students getting ready for back-to-school is not a bad thing.

Looking forward to Surface Go 2

I feel Surface Go landed on a design and price point that offer a lot of promise. Thinking ahead to a more mature Windows on ARM, a Qualcomm Snapdragon 1000, and users finally ready to benefit from connectivity anytime, anywhere, it is hard to see how Surface Go 2 (or whatever it will be called) would not offer an alternative to the current Intel architecture. And I must say I look forward to that!

Dual Geographic Paths to the Tech Future

While technology has certainly made the world a smaller, more connected place, it’s becoming increasingly clear that it hasn’t made it a more unified place. Even in the realm of technology advancements—which are generally considered to be apolitical and seemingly independent of geographical boundaries—important regional differences are starting to appear. More importantly, I believe these differences are starting to grow, and could end up leading to several distinct geographic-driven paths for technology and product evolution. If this geographical splintering happens, there will be a profound impact on not just the tech industry, but industries of all types around the world.

Having spent the last week in Beijing, China, attending the Create AI developer conference hosted by Baidu (commonly referred to as the Google—or more precisely, the Alphabet—of China), I saw some of these geographic differences come into focus. In the realm of autonomous driving, for example, it’s clear that Baidu’s Apollo platform for autonomous cars is targeted at the Chinese domestic market. While that’s perfectly understandable, the distinct character of everything from the slower speeds at which Chinese cars tend to drive, to Beijing’s nearly impassable hutong alleyways, is bound to create technology requirements and developments that may not be relevant or applicable for other regions.

In addition to autonomous driving, there’s been an increasing focus by the Chinese government to create more native core tech components, such as unique semiconductor designs, over the next several years. The “Made in China 2025” plan, in particular, has put a great deal of attention on the country’s desire to essentially create an independent tech industry infrastructure and supply chain.

One of the reasons for the appearance of these regional fissures is that technology-based products have become so integrated into all aspects of our society that governmental agencies and regulatory bodies have felt the need to step in and guide their deployment and development. Whenever that happens in different countries around the world, there are bound to be important differences in the directions taken. Just as several hundred years of local cultural norms have driven trade policies, business rules and the evolution of unique societal standards in each country, the local interpretation and guidance of key technology advancements could lead to important variations in future technological culture and standards around the world.

While these regional technology differences might not happen in a truly united world environment, they still could in one that’s merely well connected. In other words, old-world historical and cultural differences between countries or regions could prove to be a much bigger factor in the evolution of technology products than many have previously considered.

A practical example is being highlighted by the current trade wars between the US and China. Admittedly, when you consider the issues at a high level, there are a wide variety of concerns underlying the latest trade maneuvering, but for the tech world, much of it boils down to each region wanting to deter the influence or participation of major companies from the “other” side in their home country. We’ve already seen this play out with companies like Google and Facebook being banned in China, and the US blocking the use of Huawei and ZTE telecom equipment and China Mobile from participation in US markets.

In addition to these big picture differences, there are other, more subtle factors influencing tech-related relations between the countries as well. For example, many large Chinese tech companies, including Baidu, are squarely focused on Chinese domestic market needs and show little concern for other potential regional markets around the world. Given how large the Chinese domestic market is, this certainly makes business sense at many levels, but it’s noticeably different from the more global perspective that most major US tech companies have. (For the record, some Chinese-based companies, like Lenovo, do have a global perspective, but they tend to be in the minority.)

The practical result of this region-specific focus could end up being a natural selection-type evolution of certain technologies that creates regional “species” which have crucial differences from each other. Hopefully the gaps between these regional technological species can be easily overcome, but it’s not inconceivable that a combination of these differences along with regionally driven regulatory variances (and a dash of politically driven motivations) end up creating a more technically diverse world than many have expected or hoped for.

To be clear, the vast majority of current technological developments are not being geographically limited. Plus, there are still many great examples within the tech industry of companies from different regions working together. At the Baidu event, for example, Intel was given a large chunk of time during the main keynote speech to highlight how it is working with Baidu on AI. The two companies talked about the fact that Intel silicon is still a key part of how Baidu plans to drive its PaddlePaddle AI framework and overall AI strategy moving forward—despite the announcement of Baidu’s AI-specific Kunlun silicon.

We are, however, reaching a point in the worldwide tech industry’s development that we can no longer ignore potential regional differences, nor assume that all tech advancements are following a single global path. Given the incredible potential influence of technologies like AI on future societal developments, it’s critical to keep our eyes wide open and ensure that we guide the path of technology advancements along a positive, geographically inclusive route.

Baidu’s Big AI Rollout Includes a Partnership With Intel

BEIJING, CHINA – AI is well known as a hot area of innovation from the likes of Google, IBM, Microsoft and – Baidu? You may not have heard of the Chinese tech giant but it’s starting to make waves across a range of product areas including advanced chips, robotics, autonomous vehicles and artificial intelligence.

At its annual Baidu Create conference here, Baidu announced a partnership with Intel, an advanced chip architecture of its own design and a new version 3.0 of its Baidu Brain AI software.

As part of the Intel partnership, Baidu announced Xeye, a new camera designed to help retailers offer a more personalized shopping experience. The camera uses Baidu’s advanced machine learning algorithms to analyze objects and gestures as well as detect people in the store. It also leverages Intel’s Movidius vision processing units (VPUs) to give retailers low-power, high performance “visual intelligence” as to products and activity in the store.

Separately, Baidu is improving machine vision performance via its EasyDL, an easy-to-use “code free” platform designed to let users build custom computer vision models with a simple drag-and-drop interface. Released in November as part of Baidu Brain 2.0, EasyDL applications are being used by 160 grocery stores in the U.S. including Price Chopper. The computer vision application recognizes items left in a customer’s shopping cart by mistake to help ensure that they’re purchased.

The newer Baidu Brain 3.0 makes it easier and quicker to train a computer vision model using EasyDL, so, for example, the application designed for the grocery cart can now be developed in as little as 15 minutes.
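
For a sense of what a “train a custom vision model” workflow automates under the hood, here is a generic transfer-learning sketch in PyTorch. This is not EasyDL’s API; the class count and data are placeholders I’ve invented for illustration, and a real model would train on labeled photos rather than random tensors.

```python
# Generic transfer-learning sketch: retrain only the final layer of a pretrained
# backbone for a small custom classification task. Illustrative, not EasyDL code.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # e.g. five grocery items to recognize (hypothetical)

# Start from a pretrained backbone and freeze it; replace the classifier head.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-in batch of 8 RGB images (224x224) with random labels, just to show the loop.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The appeal of a drag-and-drop tool is that it hides all of this: the user supplies labeled images and the service handles the backbone, the training loop, and the deployment target.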

In addition to Xeye, Baidu also announced it will use Intel’s FPGAs (Field Programmable Gate Arrays) to enable workload acceleration as a service on the Baidu Cloud. “The best is yet to come. We are excited to see the innovative Baidu Brain running on Intel Xeon processors,” said Gadi Singer, general manager of Intel’s AI Products Group who joined Baidu CEO Robin Li on stage.

But Baidu has big chip plans in its own right. During his keynote, Li announced Kunlun, China’s first cloud-to-edge AI chip, designed for high performance AI scenarios. Li said Kunlun will be marketed for use in data centers, public clouds and autonomous vehicles.

Baidu started developing an FPGA-based AI accelerator for deep learning in 2011 and began using GPUs in datacenters. Kunlun, which is made up of thousands of small cores, has a computational capability nearly 30 times that of the original FPGA-based accelerator.

And while the initial market for Kunlun will be China, Technalysis Research analyst Bob O’Donnell said enterprises across the globe would be wise to be aware of Baidu’s growing product portfolio.

“Baidu is a key player for multinational corporations with a presence in China because they’re driving innovation in the same way that Amazon or Google is in the U.S.,” said O’Donnell. “They have an incredibly strong focus on AI across a lot of different industries that’s as broad as any other company I know of. Right now they’re very China-focused, but I expect that to expand over time.”

Chip rivals like Nvidia have made huge strides in support of autonomous vehicles with both hardware and software frameworks and simulation software for testing designed to help car makers get vehicles to market.

Similarly, Baidu has made a big investment in its Apollo software for autonomous vehicles of all sizes, from automated wheelchairs to cars, buses, trucks and other transport vehicles. At Create, it showed off the new Apollo 3.0 software that is just starting to be used in autonomous vehicles on campuses and in other closed environments in China, such as senior living communities.

“We are really excited; this will surely change everyone’s lives,” said Li, who announced that the 100th autonomous bus had recently come off the assembly line.

“You can see this is a real automatic driving solution, there’s no steering wheel, brake pedal or throttle, but it’s also very stylish inside,” said Li.

The vehicles are planned for commercial use in both China and Japan.

Analyst O’Donnell said it looks like Baidu’s autonomous vehicle effort is focused on the Asian market for at least the near term. “But they’re really establishing some important benchmarks here with the breadth of what they’re doing that competitors are sure to take note of.”

Answering the Critics of Device as a Service

The concept of Device as a Service (DaaS) has been gaining steam for a few years now, and my team at IDC has done extensive work around this topic. In fact, we’re currently wrapping up a major study on the subject that includes a massive multi-country survey of commercial adopters, intenders, and resistors, as well as a forecast that will include our view on the impact of DaaS on the commercial PC, tablet, and smartphone markets. While the momentum in this space is clear, there are still plenty of doubters who like to throw out numerous reasons why DaaS won’t work, and why it won’t bring about the benefits to both buyers and sellers that I’ve outlined in previous columns here and here. Let’s examine some of those criticisms.

There’s Hype, But Is Anybody Really Buying?
The hype defense is probably the most common pushback and question we get when it comes to DaaS, and it’s easy to understand why the average IT professional or even industry insider might be skeptical. But the fact is, we’ve now surveyed hundreds of IT Decision Makers (ITDMs) and talked to most of the major providers, and this isn’t just an interesting idea. We continue to find that DaaS is very appealing to a wide range of organizations, in numerous countries, and across company sizes. The idea that a company can offload some of the most mundane tasks its IT department deals with while right-sizing the devices it deploys, gathering user analytics, and smoothing out costs is very compelling. And as the industry has moved quickly from a focus purely on PCs to one that includes additional devices such as smartphones and tablets, interest and adoption will continue to grow.

It’s important to note that even a company completely sold on DaaS won’t make this type of transition overnight. Most companies will start small, testing the waters and getting a better understanding of what works for their organization. In the meantime, there are existing hardware, software, and services contracts that could still have months or even years left before they expire. Like many things in technology, you can expect DaaS adoption to happen slowly at first, and then very fast.

DaaS Costs Are Too High
One of the key areas of criticism leveled at DaaS is that today’s offerings cost too much money per seat. It’s hard to argue with this logic: If an organization thinks DaaS costs too much, then it costs too much, right? But often this perception is driven by an incomplete understanding of what a provider includes in the DaaS offering. Today’s contracts can run from just the basics to something much more complete. Yes, a contract with a full range of services such as imaging and migration, deployment and monitoring, break/fix options, and secure disposal can be pricey. But what critics often fail to realize is that their company is paying for these services in some way or another today. Either they’re paying their own IT staff to do it, or they’re paying another service organization to do bits and pieces of it (and they’re likely not tallying all the costs in one place). Alternatively, some of these tasks—such as secure disposal—aren’t happening at all, which is one of those areas that could end up costing the company a lot more money in the end.

Now with all that said, it’s entirely possible that at the end of the day a company may well end up paying more for its entire pool of services under a DaaS contract. At that point, the first question it needs to ask is: Am I buying my DaaS service from the right vendor? If the answer is yes, then the follow-up questions should be: Are the benefits of managing all these services through a single provider worth the extra cost to my organization? Does it free my IT organization to do other important jobs? The answer may be yes.
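
A quick, back-of-the-envelope way to frame that question is to tally the per-seat costs a company already pays piecemeal and set them against a single DaaS quote. The figures in this sketch are hypothetical placeholders, not market pricing; the point is the structure of the comparison, not the numbers.

```python
# Back-of-the-envelope DaaS comparison with hypothetical numbers (not market pricing).
SEATS = 500

existing_monthly_per_seat = {
    "hardware amortization": 28.0,   # device cost spread over its lifetime
    "imaging & deployment":   6.0,
    "monitoring & break/fix": 11.0,
    "secure disposal":         2.0,  # often skipped today, which carries its own risk
}

daas_quote_per_seat = 52.0           # hypothetical all-in monthly price from a provider

in_house = sum(existing_monthly_per_seat.values())
print(f"In-house total:  ${in_house:.2f}/seat/month  (${in_house * SEATS:,.0f}/month)")
print(f"DaaS quote:      ${daas_quote_per_seat:.2f}/seat/month")
print(f"Premium for single-provider management: ${daas_quote_per_seat - in_house:.2f}/seat/month")
```

Only when all of the piecemeal costs are pulled into one place can a company judge whether the premium (if there is one) is worth the simplification.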

Refurbs Will Negatively Impact New Shipments
One of the key benefits of DaaS is the promise that it will shorten device lifecycles, which has always been, to me, one of the win/win aspects of this concept. Companies win by replacing employees’ hardware more often thanks to pre-determined refresh cycles. Instead of finding ways to keep aging devices around for “one more year” to push out capital expenditures, DaaS allows companies to equip employees with newer machines that drive increased productivity, offer improved security, and lead to improved user satisfaction. From the hardware vendor side, the benefits are obvious: faster refresh rates that become more predictable over time.

But what about all those PCs collected at the end of a two- or three-year DaaS term? Won’t they cannibalize shipments of new PCs? The fact is, today there’s already a huge business around refurbished PCs, tablets, and smartphones. What the DaaS market could do is create a much more robust, high-quality market of used commercial devices. As with the automobile leasing market, these devices receive regular maintenance, which means a higher quality used product. DaaS providers can redeploy (or sell) these into their existing markets at lower-than-new prices and still drive reasonable profits. Or they can target emerging commercial markets where even ultra-low-cost devices are a tough sell today.

Ultimately, I believe that DaaS will prove to be a net positive in terms of overall shipments for the industry. Even if that proves incorrect, I’m confident it will drive greater profitability per PC for vendors participating in the DaaS market.

DaaS Will Never Appeal to Consumers
It’s true that to date DaaS has been focused on the commercial segment, but it’s only a matter of time before we see consumer-focused plans come to market. Apple’s success with the iPhone Upgrade Program, where you pay a monthly fee that includes AppleCare+ coverage and the promise of a new iPhone every year, shows there’s already an appetite for this. It also proves that a robust secondary market doesn’t necessarily cannibalize a market (and Apple profits greatly from its resale of one-year-old iPhones). You can easily imagine Apple adding additional services to that program and extending it to include two- or three-year upgrade paths for iPads and Macs.

And so it’s not hard to imagine the likes of Apple, HP, Dell, Lenovo, and others eventually offering consumer-focused DaaS products. To many, the idea of paying a single monthly fee to one company to eliminate most of the hassle of managing their devices—and to ensure no budget-busting costs when it’s time to replace an old one—would be too good to pass up.

Memory market and Micron continue upward trend

One of the darling tech stocks of the last year has been Micron, a relative unknown in the world of technology compared to names like Intel, NVIDIA, and Samsung. With a stock price driven by market demands, and increasing over 90% in the last calendar year, there are lots of questions about the strength of Micron in a field where competitors like Samsung, and even Intel, are much bigger names.

Last month, Micron’s earnings were eye-opening. For its fiscal Q3, it posted a 40% increase in revenue over the same quarter the previous year. Even more impressive was a doubling of profit in that same period. The quarter delivered $3.82B in net income on $7.8B in revenue, with a Q4 revenue forecast of $8.0-8.4B.

NVIDIA, by contrast, had $3.2B in revenue last quarter. Yet the GPU giant is getting more attention and more analysis than Micron, a company with more than twice its revenue.

As part of the earnings announcement, Micron CEO Sanjay Mehrotra expressed confidence in the continued demand for memory as well as the ability for Micron to maintain its profit margins with consistent pricing. This directly addressed a key segment of the financial analyst community that continues to worry that memory demand will dry up and limit the growth potential for Micron. Micron is at higher risk in that scenario because of its singular focus on memory technology, while competitors like Samsung and Intel are more diversified.

This Boise, Idaho-based company has to answer the same question as the rest of the memory vendors in the tech field: will demand for memory abate as products shift in the market or when build capacity catches up?

There are several reasons why we could see demand for both DRAM (system memory) and NAND (long-term storage) slow down. By many measures the smartphone market has peaked, with only developing nations like China and India still increasing unit sales, and with much lower-cost devices. China sales of phones are in flux thanks to trade war and tariff concerns – Qualcomm and Micron are both US-based and are major providers of smartphone technology. The Chinese government is investigating memory price-fixing accusations against all the major vendors, and a poor outcome there could incur major penalties and unpredictable changes to the market.

But the Micron CEO doesn’t believe those factors will win out, and neither do I. For the foreseeable future, DRAM demands will continue to grow with mobile devices as we increase the amount of memory in each unit. The coming explosion of IoT products numbering in the billions will all require some type of system DRAM to run, giving Micron and others a significant opportunity to grow. And we cannot forget about the power of the data center and, in particular, the AI compute market. NVIDIA might be the name driving the AI space but every processor it builds will require memory, and massive amounts of it.

In the NAND market for SSDs, there is a lot of competition. But Micron benefits from its OEM arrangements as well as its push into more vertical integration, selling direct to consumers and enterprise customers. Micron has made a push to counter the DIY and OEM dominance of Samsung SSDs with its own Crucial and Micron-branded options, a move that is improving with each generational release.

As more customers migrate from spinning hard drives in their PCs and servers to larger capacity solid state drives that are faster and more reliable, there remains a sizeable opportunity for memory vendors.

If demand continues to increase, capacity becomes the next question. When AMD was building its Vega GPU and utilizing a new memory technology called HBM2, the product suffered because of limited availability. Though Micron was not playing in the HBM (high bandwidth memory) space, it is a recent example of how the memory market is trying to play catch-up to the various demands of technology companies.

There are additional fab facilities being built, but if it seems like vendors aren’t bringing them online as fast as they could, you aren’t alone in thinking so. New fabs will alleviate capacity concerns, but they will also decrease pricing and lower margins, something that any reasonable business will be concerned about in volatile markets.

Over the decades of memory production, the market has been cyclical. As technologies moved from generation to generation, demand for the older parts would plummet, followed by the higher prices associated with the NEXT memory technology. As use of that memory peaked and fell, the cycle would restart anew. But because of the growth in demand for memory products of all kinds, and segments of extreme growth like AI and IoT, it looks like this pattern will be stalled for some time.

The Changing Relationship Between People and Technology

Despite all the clichés about the challenges of change, the truth is that it can be difficult for people to accept impactful alterations to the way they do things. This is particularly true in the mode of interaction we have with technology-based products because many of the changes occur in ways that aren’t immediately obvious. As tech gadgets, applications and services start to impact more aspects of our lives, there’s a growing awareness of the potential harmful impact of overusing tech-related products.

Together, these issues point to the fact that we’re entering an interesting new phase in the relationship that people have with technology. Interestingly, I expect this phase to be significantly more impacted by common traits of human nature than in the past. Why? Because for technology products to continue to evolve, the companies that create them are going to have to be more cognizant of how those products either influence or are influenced by human factors that they haven’t had to think much about in the past.

The way that people now think about and interact with technology is changing and successful companies need to be mindful of these new perspectives. Part of the reason for this shifting perspective is the fact that our increased exposure to tech-based products has altered our expectations. Many of the changes we encounter with today’s new tech products are subtle evolutions of existing products and are hard to appreciate. Over time, a collection of these changes can certainly produce a notable difference but in this age of relatively advanced, mature product categories, individual features or new hardware models often don’t make much of an impression anymore.

Of course, sometimes individual products or even features can have an immediate and profound impact—if not on the market overall, then at least with certain subsections of the market. Smart speakers, such as Amazon’s Echo or Google Home, for example, are arguably one of the better recent examples of this phenomenon, as they quickly became a commonly used device in households all over the world.

While there’s no single (or simple) answer as to what makes certain devices, applications or services have a broader, faster impact than others, I’d argue one consistent thread across most of them is that they connect to some fundamental aspect of human nature better than others. Whether it be a more intuitive means of interaction (as voice-based smart speakers have enabled), more intelligent and accurate means of analyzing information (as AI-based software tools have started to do), or some other type of capability that a wide variety of people can easily relate to, it’s the right kind of connection to how people think and act that helps products be successful.

Conversely, ignoring key aspects of human nature can prevent certain products from either achieving the level of success that some expect, or from evolving in ways that many predict. At a very basic level, many people don’t really like dramatic changes, as mentioned earlier, particularly when it comes to their technology-related products and services. While some of the resistance to change is certainly age-based, the recent challenges Snap faced when it dramatically modified its app’s user interface show that even young people can be resistant to significant alterations to their technology products.

In the business world, resistance to major changes in technology is particularly pronounced. That’s why, for example, there are still a lot of companies running 1970s- and 1980s-era mainframes and plenty of other much older software. It’s easy to get caught up in the sleek technology of today’s cloud-based, microservice-enabled software environments, but even in the most advanced organizations, those tools typically only represent a tiny fraction of what they’re actually running across the whole company. Tech companies who ignore those realities and don’t provide tools to ease the transition process between older tools and newer ones (after all, it’s human nature to look for an easier way to get a task done—or just avoid it if it appears overwhelming), are bound to face significant challenges.

For consumers, the influences of human nature, and the impact they have on interactions with technology-driven products and services, manifest themselves in an enormous variety of ways. A recent, somewhat controversial, example involves self-driving cars. While few would argue conceptually about the value of autonomous driving features, there are serious and profound questions about how realistic it is to offer semi-autonomous capabilities (such as Tesla’s AutoPilot mode) in cars. As several recent tragic accidents have shown, when people believe they don’t have to actually pay attention while sitting behind the wheel, many stop doing so. The idea that people are going to maintain the level of concentration and focus necessary to very quickly take over in the event of a sudden change in the driving environment goes completely against human nature. In my mind, that makes these semi-autonomous capabilities potentially even more dangerous than regular human-powered driving because they lull people into a false sense of security. Until the cars can be completely autonomous and require zero interaction on the part of the driver and passengers inside—a capability that’s still a ways off according to most—offering features that go directly against human nature is a mistake.

There are other instances of how human interactions with technology are not as easily defined and well understood as many think. Most people are creatures of habit and once they get comfortable interacting with devices, applications or services, they’re not overly eager to change. One of the most interesting examples involves PCs and tablets. Though many people were quick to write off both desktop and notebook PCs as dinosaurs once tablets like the iPad arrived, it’s now clear that a keyboard-based interaction model is still the preferred method for interacting with powerful computing devices—regardless of the user’s age.

Yet another way in which people’s relationship with technology products and services has changed is just starting to be felt, but I believe it could end up being one of the more profound adjustments in how people interact with technology. In a classic case of “be careful what you wish for,” technology has now given us the ability to access almost all of the world’s information and all of the world’s people at any time. While that conceptually sounds like an amazing accomplishment—and certainly, in many ways, it is—the harsh reality of that capability is a world that’s grown further apart as opposed to closer together, as most presumed this capability would enable. While the reasons for the growing separation are many and complex, there’s no question that the overuse of technology is starting to take its toll.

As our usage of technology and its influence on all aspects of our lives continues to increase, it’s also leading more people (and companies) to do more public soul-searching about how people interact with tech products and what their expectations for those interactions should be. Just as we’ve started to see ethical questions being raised in the medical field based on advancements in technology there, so too are we starting to see questions about whether “because we can” is really going to be an acceptable answer for many types of advancements in tech.

Despite these concerns, there’s no question that technology has had a profoundly positive impact on both our personal lives and the world around us. Even potentially challenging upcoming technology advancements like AI (artificial intelligence) are more likely to have a profoundly positive impact for most people than a negative one. But as with virtually everything in life, common sense and human nature tell us that sometimes there’s just too much of a good thing. More importantly, tech companies that can adjust their strategies to better accommodate the growing sophistication in the relationship between people and their products should be poised for a more successful future.

Apple’s Most Strategic Investment So Far This Year

There has been a lot of speculation lately that Apple is getting ready to create a media bundle that would be offered under subscription. According to multiple sites, the idea would be to bundle all of their media properties under a special program that mirrors something like Amazon’s Prime services.

I have no direct knowledge that this will happen but if you read the tea leaves surrounding Apple’s various acquisitions and new media emphasis, it is not too hard to see this possibility.

With that in mind, Apple’s most strategic investment so far this year that could be related to this is Texture, the magazine subscription service that offers close to 200 magazines for $9.99 a month. I am a big fan of Texture and use it almost daily to read the highlighted articles in its article overview section as well as the actual magazines I like to read, especially the food, sports, and news titles available.
The service itself has grown from about 20 magazines at launch to the 200+ available today, and now that Apple owns it, I suspect it will add many more new magazine titles to its offering in the future.

Recently, Apple reintroduced a more robust version of its free News app, which includes human curation as well as ML and AI-based algorithms that try to block fake news and only deliver well-researched and well-written stories that would be of interest to its customers.

I had a briefing on this News service when it relaunched and was highly impressed with the journalistic talent Apple has brought in to help curate this site as well as contribute to its content. In the past, I had not spent a lot of time in the Apple News app, but since it is now placed on the home screen of my iPhone X with the iOS 12 public beta, I find myself checking it multiple times a day to keep up with the news.

These two apps have become vital to me, since getting fact-based news and quality content has become critical in these days when fake news and political bias are blasted over social media, and Facebook, Twitter, and other sites struggle to keep this type of rhetoric in check.

Indeed, what has become essential to many is the ability to go to a site and know that what is there is the result of well-researched journalism and, at least in the news reporting sense, the stories they post are well written by professional journalists and can be trusted to be accurate.

Now that Apple also has Texture, it would be smart for them to integrate part of the News service into the Texture subscription service and innovate on both fronts. These combined properties could become vehicles for more in-depth stories, documentaries, and even commentary based on more personalized preferences. Apple is also big into AR. This could be the place where they integrate AR content, visual views, and functionality into the News area they control, or even create dedicated AR/MR-based magazines that would be part of Texture.

What makes this possible is that Apple has created a set of platforms that can easily sit on top of their already successful subscription services like iTunes and Apple Music. While Texture has its own subscription platform today, Apple could either integrate that into its existing subscription platform or take the magazine content and add it to the subscription infrastructure it has now.

However, if they were to follow Amazon’s Prime example, then any service would also need to have video content. Apple is way behind its competitors when it comes to video offerings, and especially homegrown content, and it would need to be more aggressive in acquiring or creating many more movies, TV series, and theatrical shows to compete with Amazon, Netflix, and others.

In our Techpinions Podcast last weekend, Ben Bajarin and Arthur Greenwald, a well-known Hollywood producer who co-produced the Mr. Rogers show and knows a great deal about creating original content, gave a good perspective on the challenges Apple has when it comes to attracting talent and getting quality original shows to market. Arthur’s key points were:

  • Apple’s challenges will be dealing with content failures and Hollywood’s “characters” who sometimes get into trouble socially
  • Apple is still experimenting and does not know if they want to be a network (like ABC) or premium channel (like HBO)
  • There are baseline metrics for success that indicate a show’s return on investment. Apple’s bar may be different, but hits are few and far between in network television.
  • Global success is a gold mine with content if it can be achieved. Apple’s potential upside is the global scale of their customer base.

Given these challenges, I would not be surprised to see Apple buy a couple of the smaller video production companies that already have hits in the market to accelerate their original programming. Also, given their tight relationship with Disney and Pixar, I could even see them trying to tap into some of their skills and content as well.

However, good planning by Apple when it comes to subscription infrastructure puts it in an excellent place to deliver an all-encompassing media subscription service, which makes it somewhat likely it will offer something like this in the near future. More importantly, with the right mix of content services, priced competitively, it could add significant revenue to its services program and bring more people into Apple’s hardware, software, and services ecosystem.

While still mostly a rumor, odds are that Apple is moving in this direction and Texture and an even more innovative News application could become one of the cornerstones of any subscription service they offer.

Top Tech Things We Take For Granted

Let’s face it, this has been a crummy year for tech. From the exposure of outright fraud (Theranos), shoddy business practices, numerous examples of inappropriate (and worse) corporate and workplace behavior, data and privacy breaches, concern about the ‘bigness’ and ‘dominance’ of certain companies, worries about screen addiction…the list goes on. But as we close out the first half of the year and head into the July 4th holiday, perhaps it’s not a bad exercise to step back and recognize some of the good things about tech.

This is not a review of “top apps” or “best gadgets”. Rather, this is my own, admittedly subjective list of some everyday apps, tools and capabilities that are just plain important and useful to most consumers. There are surely downsides to each of these, but a good gauge is how much you would miss them if they suddenly disappeared.

Google Maps. I marvel at just how well Google Maps generally works, and how it just continues to improve, without fanfare. Just think about how generally accurate it is, and how many major and minor features have been introduced that make Google Maps increasingly useful. There isn’t a huge amount of competition for Google Maps, and nobody really cares.

Smartphones. No doubt there are downsides to the smartphone. But step back and just think about how many different things can be done on this little pocket computer. Even mid-priced smartphones are fantastic. And, given how many hours a day smartphones are used and how many functions they perform, it’s remarkable how generally reliable they are.

WordPress. There are many terrific publishing platforms and content management systems. But WordPress is the granddaddy. It has enabled tens of millions of individuals and small businesses to set up beautiful, highly functional websites with relatively little training. There’s a great ecosystem of add-on tools and features.

Content-a-Looza. We’re all highly aware of how digital and the internet are impacting huge industries, such as print media, publishing, and so on. Not to diminish that at all, but on the opposite side, it’s amazing how low the barriers are to both creating and publishing content across multiple forms of media. Consider how quickly and easily one can publish a long-form story on Medium, upload an innovative clip to YouTube, get a song onto SoundCloud, start a podcast with a colleague, etc. And the hardware and software tools to enable these creations are just so much cheaper and more accessible than they used to be. Sure, there’s a lot of crappy content out there, monetization is challenging, traditional curation and entire industries are being up-ended — but on the other hand, there’s the rise of an entire creative class, be it profession or hobby, that might never have existed otherwise.

Wikipedia. The content might not always be 100% accurate or up-to-date, but Wikipedia is incredibly useful, offering generally good content across a huge number of topics and categories. That it’s a non-profit and survives on donations averaging $15 a year from millions of people is also testament to some of the good things about the Internet. And 99% of people have no idea how the content gets up there…it’s just there.

Travel Apps. On the one hand, the UI of the leading travel apps hasn’t changed in, seemingly, a decade. On the other hand, if a business trip suddenly landed in your lap, you could book a flight, hotel, and car – at reliably competitive prices – in less than 10 minutes and in fewer than 20 total clicks. Seriously, try it. Consider what really has to happen at the back end to make it all work. And how frequently things change. Dizzying.

The Cloud. I speak about the Cloud here from a consumer, not a business standpoint. It’s probably the most game-changing framework since the advent of the PC. Consider that, ten years ago, if your PC crashed it was a complete disaster. Now, if you’ve taken the right precautions, the PC itself is practically disposable, since everything is stored elsewhere. The cloud has also helped unleash competition to what were seemingly entrenched businesses: think Quicken to Mint, iTunes to Pandora/Spotify, Outlook to Gmail, and the world of streaming content. All of these would be nearly impossible without the unfathomably steep drop in the price of storage and the industry’s nearly universal embrace of this new business framework.

Crowdfunding. Perhaps this is a personal favorite, but I think crowdfunding represents some of the best possibilities of tech and the internet. Crowdfunding has helped fund millions of people/projects that never would have had a chance of getting financed. The projects tend toward the creative side, which is great. Crowdfunding offers a nearly instant feedback loop on an idea’s viability (and not always correct, on either side, but that’s life, too). I’m also impressed that so many people give to projects when what they get back is relatively minor or nothing at all. We see people’s optimistic, generous, and also gullible sides, exposing among the more human sides of the Internet.

There are downsides to everything, and certainly good cause for conversation about the big picture impact of tech. But it is a useful exercise to occasionally step back and appreciate how effective and useful some of this stuff is, and to applaud the millions of bright, honest, hard-working people who helped create it. Happy 4th.

Can Qualcomm Hit Intel with Rumored Snapdragon 1000 Chip?

Over the course of the last week or two, rumors have been consistently circulating that Qualcomm has plans for a bigger, faster processor for the Windows PC market coming next year. What is expected to be called “Snapdragon 1000” will not simply be an up-clocked smartphone processor; instead, it will utilize specific capabilities and design features for larger form factors that require higher performance.

The goal for Qualcomm is to close, and eventually eliminate, the gap between its Windows processor options and Intel’s Core-series CPUs. Today you can buy Windows 10 PCs powered by the Snapdragon 835, and the company has already announced the Snapdragon 850 Mobile Compute Platform for the next generation of systems. The SD 835-based solutions are capable, but consumers often find they lack some of the necessary “umph” for non-native Arm applications. The SD 850 will increase performance by around 30% on both the CPU and graphics, but it will likely still be at a disadvantage.

As a user of the HP Envy x2 and the ASUS NovaGo, both powered by the Snapdragon 835, I strongly believe they offer an experience that addresses the majority of consumers’ performance demands today, with key benefits of extraordinary battery life and always-on connectivity. But more performance and more application compatibility are what is needed to take this platform to the next level. It looks like the upcoming Snapdragon 1000 might do it.

The development systems that led to this leak/rumor are running with 16GB of LPDDR4 memory, 256GB of storage, a Gigabit-class (likely higher) LTE modem, and updated power management technology. They also use a socketed chip, though I doubt that would make it to the final implementation of the platform, as it would dramatically reduce the board-size advantage Qualcomm currently has over Intel offerings.

Details on how many cores the Snapdragon 1000 might use and what combination of “big.LITTLE” it might integrate are still unknown. Early reports are discussing the much larger CPU package size on the development system and making assertions about the die size of a production SD 1000 part, but realistically, anything at the prototyping stage is a poor indicator of that.

It does appear that Qualcomm will scale the TDP up from the ~6.5 watts of the SD 835/850 to 12 watts, more in line with what Intel does with its U-series parts. This should give the Qualcomm chip the ability to hit higher clocks and integrate more cores or additional systems (graphics, AI). I do worry that going outside of the TDP range we are used to on Qualcomm mobile processors might lead to an efficiency drop, taking away the extended battery life advantage that its Windows 10 PCs offer over Intel today. Hopefully the Qualcomm product teams and engineers understand how pivotal that advantage is to the platform’s success and will maintain it.

Early money is on the SD 1000 being based on a customized version of the Arm Cortex-A76 core announced at the end of May. Arm made very bold claims to go along with this release, including “laptop-class performance” without losing the efficiency advantages that have given Arm its distinction throughout the mobile space. If Arm, and by extension Qualcomm, can develop a core that is within 10% of the IPC (instructions per clock) of Skylake, while keeping the significant die-size advantages we think they can achieve, the battle for the notebook space is going to be extremely interesting toward the middle of 2019.

Intel is not usually a company I would bet on being caught flat-footed and slow to respond in a battle like this. But its well-documented troubles with the 10nm process technology transition, along with TSMC’s execution on its roadmap to 7nm and EUV, mean that Qualcomm will have an opportunity. Qualcomm, and even AMD in the higher-end space, couldn’t have asked for a better combination of events to tackle Intel: a swap of process technology leadership from Intel to external foundries, along with new CPU and core designs that are effective and efficient, means we will have a battle in numerous chip markets that we have not had in a decade.

These are merely rumors, but the timing, matching up with the release of the Arm Cortex-A76, makes them more substantial – Qualcomm appears to be stepping up its game in the Windows PC space.

New Tech, Corporate Responsibility, and Governance

Over the past six months, technology has had its fair share of bad press. We have had many stories covering social media with fake news, online harassment and user tracking, kids and screen addiction, AI stealing our jobs, and robots taking over the world. This past Saturday, however, there was a New York Times story about how domestic abusers take advantage of smart home tech that made me think of the challenges that brands, as well as governments, will increasingly face.

You heard me say it before: technology per se is not bad, but humans can certainly use it in ways that can cause harm to themselves or others.

Two Sides of the Same Coin

I have to admit it never occurred to me that smart home technology could be used to inflict more pain on victims of domestic abuse. The Times referred to reports from abuse helplines that saw an increase over the past year in calls about abusers using smart home tech for surveillance and psychological warfare. Victims mentioned thermostats being controlled to make the house too hot or too cold, doorbells ringing when nobody was at the door, and door-lock PIN codes being changed to prevent access to the home.

Maybe because I am a “glass half full” kind of person, I always thought about all the advantages smart home tech brings, whether helping monitor elderly parents or assisting people with disabilities in being more independent in their homes. Of course, abusers are not new to using technology to track their victims; think about GPS, for instance, or the use of social media. Even then, I always saw the other side of the coin, considering how GPS could not just help me find my phone but also find my dog or make sure my child was where she said she was.

According to the National Domestic Violence Hotline, 1 in 4 women (24.3%) and 1 in 7 men (13.8%) aged 18 and older in the United States have been the victim of severe physical violence by an intimate partner in their lifetime. One in 6 women (16.2%) and 1 in 19 men (5.2%) in the United States have experienced stalking victimization at some point during their lifetime in which they felt very fearful or believed that they or someone close to them would be harmed or killed. What makes smart home tech particularly concerning, according to the Times, is that these new gadgets, like cameras, smart speakers, and smart locks, seem to be used to abuse women in particular. This is because, by and large, men control technology purchases in the home, as well as their setup and any services linked to them.

Educating and Assisting

I find blaming a male-driven Silicon Valley for designing products that might be used to hurt women to be misplaced. It is true that, quite often, tech products are designed by men for men, but this does not mean they are designed with harm to women in mind.

That said, I do believe that tech companies have a responsibility to think through how technology is used, and they should warn of how it could be misused. Of course, it is not easy to add to your smart doorbell instruction manual: “Warning, an abusive partner could use the camera to monitor every person who gets to your door or every time you leave the house.” Companies could, however, work with support agencies to help them understand how the technology could become a tool for abuse, so they could advise vulnerable people, teach about it at prevention workshops, and be prepared with practical steps to be used for safety planning.

Staying a Step Ahead

Aside from helping with prevention and assisting victims, I feel that the legal system needs to stay a step ahead when it comes to technology across the board, and the case of using tech for domestic abuse is no different.

The criminal justice system’s intervention in domestic abuse took over twenty years to get where it is today. And it is far from perfect! In the early 1970s, the law required the police either to witness a misdemeanor assault or to obtain a warrant to arrest. Only in the late 1970s were warrantless probable-cause arrest laws passed. In the late 1980s, after the Minneapolis Domestic Violence Experiment was published showing that arrest was the most effective way to reduce re-offending, many US police departments adopted a mandatory arrest policy for spousal violence cases with probable cause. When it comes to domestic abuse, however, it seems that the first judgment call on whether to proceed with an arrest is whether or not there are visible and recent marks of violence.

Psychological abuse is much harder to prove, and the process puts a huge burden on the victim who, in most cases, is reticent to come forward in the first place. This is what is concerning about how tech can play a role in domestic abuse, and gaslighting in particular.

I am no criminal law expert, but it seems to me that the legal system should not only be educated about how technology can be used to victimize but also, and maybe most importantly, how the same technology can provide information to back up the victim’s story.

We always walk a fine line between civil liberties and policing, but recent history has proven that rushed decisions made in response to an incident are rarely the best. Technology is on a path to becoming more sophisticated and smarter, at times beyond human comprehension, and sadly the world in recent years has shown that there are plenty of people looking to exploit tech for evil purposes. Hoping for the best is no longer an option.

Thoughts on Bad Blood by John Carreyrou

The new bestseller about Theranos, Bad Blood by John Carreyrou, is a must-read for anyone in the tech world, particularly those in Silicon Valley. Not only is it disturbing in its own right, but it’s a reflection on Silicon Valley and not in a positive way. How could its founder, Elizabeth Holmes, get away with so much in the middle of the most technology-aware community in the world?

Holmes convinced most everyone she came in contact with that she had invented and perfected a revolutionary blood tester that would obsolete the competition. She did it without ever having to validate her technology. When she finally did ship her product, it was nothing more than a competitor’s tester modified without the required FDA approvals, a product that never worked. Her fraud was so effective that she raised more than a billion dollars and convinced Walgreens to put the devices in its stores, exposing its customers to faulty test results and putting lives in danger.

What’s given less attention is how Silicon Valley failed to detect the fraud and gave Holmes her legitimacy. You might excuse some of her board members with no tech or governing experience, but you can’t excuse the professional investors and venture capitalists. You can’t excuse the well-known attorney David Boies and his law firm, described as behaving like thugs with their attacks and threats on those trying to reveal the truth. And you can’t excuse the Stanford professors who are supposed to discern truth from fiction.

The people she convinced are a who’s who of cabinet secretaries, professors, investors, politicians, and business people. None of them ever insisted on doing a blood test to compare with a standard test before participating. None of them ever insisted on seeing an FDA approval. None of them ever insisted on an engineering assessment. None of them ever insisted on anything to confirm her claims. They believed it was true because she said so and because it was Silicon Valley.

Because it was Silicon Valley, major publications put her on their front covers and elevated her to the level of Steve Jobs. Because it was Silicon Valley, politicians flocked to meet her and take selfies. They all believed it was true because it was Silicon Valley.

Yes, Silicon Valley is sometimes known for “faking it until you make it,” getting the promise out in front of the solution. But that’s normally an interim step done to raise money, and the outcome is either the promised breakthrough or a failed product that never gets released. It’s unheard of for a company to ship a product that doesn’t work and that puts lives in danger.

Some employees and some in the tech community were skeptical about Holmes’ claims, especially when Walgreens made an investment but was never allowed to see the product in use. But the company’s protective bubble of lawyers, PR firms, promoters and VCs drowned them out for much too long. After all of this and the recent criminal indictments, a few Silicon Valley VCs still think she was wronged and blame her downfall on Carreyrou.

For those who take pride in Silicon Valley’s contribution to the world, the Theranos story is a black mark on the community and, hopefully, an aberration.

PTC Demonstrates Augmented Reality’s Real-World Value

This week I attended PTC’s LiveWorx18 conference in Boston, where the company demonstrated some of the ways its customers are leveraging AR technology today. PTC is an interesting company because it has a wide range of solutions beyond AR, and it has done a good job of telling a story that shows how industry verticals can utilize its Internet of Things (IoT) technology as well as its Computer Aided Design (CAD) products to drive next-generation AR experiences.

Vuforia-Branded AR
Back in 2015, PTC purchased the Vuforia business from Qualcomm. Vuforia is a mobile vision platform that uses a device’s camera to give apps the ability to see the real world. It was among the first software developer kits (SDKs) to enable augmented reality on a wide range of mobile devices, long before Apple launched ARKit or Google launched ARCore (today Vuforia works with both of those platforms). Today developers can use it to create AR apps for Android, iOS, and UWP. As a result, there are tens of thousands of Vuforia-based apps in the real world.

In addition to the Vuforia Engine, PTC also has software called Vuforia Studio (formerly ThingWorx Studio) that lets users create AR experiences, such as training instructions, from existing CAD assets using a simple drag-and-drop interface (I’ve watched PTC executives create new AR experiences on stage during events using this software). Vuforia View (formerly ThingWorx View) is a universal browser that lets users consume that Studio-created content. And Vuforia Chalk is the company’s purpose-built remote assistance app that enables an expert to communicate and annotate with an on-site technician through an AR interface. Most companies today are utilizing PTC-based technology through mobile devices such as tablets and smartphones already present in the enterprise, but a growing number are testing headsets from partners including Microsoft, RealWear, and Vuzix.

In addition to these shipping products, the company recently acquired new technology that it will deliver in future products: the ability for a person wearing an AR headset to create step-by-step AR experiences (Waypoint) and to later edit that experience for consumption (Reality Editor). Training is one of the key use cases for AR across a wide range of industry verticals, and this type of software will make it much easier for companies to streamline knowledge transfer between experienced workers and new hires.

IoT Plus AR
I’ve long suggested that one of the powerful things about AR is that it has the potential to let us humans see into the Internet of Things. PTC demonstrated this ability during its keynote. It also showed a very cool example of moving a digitally created control switch from an AR interface to a physical world control panel (in this case, the notebook screen of an IoT-connected machine). The company also created a real, working manufacturing line on the expo floor that demonstrated the integration of IoT, AR, and robots.

There are plenty of companies doing good work in AR today, but one of the things that makes PTC stand out is the fact that its software is straightforward to use, it helps companies leverage many of the digital assets they already have, and it promises to help them make sense of data generated by the IoT.

I attended several of the working sessions during the show, including one on connecting AR to business value. PTC isn’t just talking the talk: During that session, the presenter gave real-world advice to IT decision makers trying to utilize AR in areas such as service, sales, and manufacturing.

The Future Requires Partners
One of the things I like about PTC and its CEO Jim Heppelmann is that the company is confident in its product line but humble enough to know that partnerships are key to building out new technologies such as IoT and AR. In the weeks leading up to the show, and on the keynote stage, the company announced strategic partnerships with companies including Rockwell Automation, ANSYS, and Elysium. And earlier this year it announced a key partnership with Microsoft (PTC even had Alex Kipman, Microsoft Technical Fellow, present the day-two keynote).

As a software company, PTC depends upon hardware partners to bring the next generation of hardware to market. It knows that AR on mobile devices is powerful, but AR on a headset is game-changing for workers who need to use their hands to get work done. Like me, executives at PTC are eager – and a bit impatient – to see new hardware from companies such as Microsoft, Magic Leap, and others ship into the market. This hardware is going to be key to moving AR forward in the enterprise, and I look forward to seeing what PTC and its partners can do with it once it finally arrives.

Can Dell capitalize on our VR/AR future?

The market for current-generation VR technology is in an interesting place. Many in the field (including analysts like myself) looked at the state of VR in 2015/2016 and thought that the rise of sales, adoption, software support, and vendor integration would be significantly higher than what we have actually witnessed. Though the HTC Vive and Oculus Rift on the PC, as well as Gear VR from Samsung and various VR platforms from Qualcomm, do provide excellent experiences in price ranges from $200 to $2000, the curve of adoption just hasn’t been as steep as many had predicted.

That said, most who follow the innovation developments in VR and AR (augmented reality) clearly see that the technology still has an important future for consumer, commercial, and enterprise applications. Let’s be real: VR isn’t going away, and we are not going to see the kind of regression that plagued previous attempts at a virtual reality market. Growth might be slower, and AR could be the inflection point that truly drives adoption, but everyone should be prepared to consume content and interact through this medium.

There is no shortage of players in the VR/AR market, all attempting to leave their mark on the community. From hardware designs to software to distribution platforms and even tools development, there are a lot of avenues for companies looking to invest in VR to do so. But one company that could potentially have a more significant impact on VR, should it choose to make the investment of budget and time, is Dell. It may not be the obvious leader for a market space like this, but there is an opportunity for Dell to leverage its capabilities and experience to get in on the ground level of disruptive VR technology. There is more Dell can do than simply re-brand and resell whatever direction Microsoft has determined for VR.

Here are my reasons why that is the case:

  • The combined market advantage: Dell has the ability to address both commercial and consumer VR usage scenarios through its expansive channel and support systems. In recent months I have seen interest in commercial applications of VR accelerate faster than interest in consumer applications, as the market sees what VR and AR can do for workflows, medical applications, design, customer walk-throughs, and more. Very few PC and hardware companies have the infrastructure in place that Dell does to be able to hit both sides.
  • Display expertise: Though a lot of different technology goes into VR and AR headsets (today and in the future), the most prominent feature is the display. Resolution, refresh rate, response time, color quality, and physical design are key parts of making a headset comfortable and usable for long sessions. Dell is known for its industry-leading displays for PCs, and though the company isn’t manufacturing the panels, it has the team in place to procure and scrutinize display technology, ensuring that its products would have among the best experiences available for VR.
  • Hardware flexibility: Because Dell has the ability to offer silicon solutions from any provider, including Intel, AMD, Qualcomm, or anyone else that provides a new solution, it is not tied to any particular reference design or segment. While Intel has backed out of VR designs (at least for now) and Qualcomm is working with partners like Lenovo and Oculus as they modify mobile-based reference designs for untethered VR headsets, Dell would be able to offer a full portfolio of solutions. Need a tethered, ultra-performance solution for CAD development or gaming? Dell has the PCs to go along with it. Need a mobile headset for on-site validation or budget consumer models? It could provide its own solution built with Qualcomm Snapdragon silicon or innovate with a mobile configuration of AMD APUs.
  • The Alienware advantage: One of the most interesting and prominent uses for VR and AR is gaming, and Dell has one of the leading brands in PC gaming, Alienware. By utilizing the mindshare and partnerships that division lead Frank Azor has fostered since the company was acquired by Dell, the Alienware brand could position itself as the leader in virtual reality gaming.
  • Being private offers stability: Though there are rumors that Dell wants to transition back to being a public company, being privately owned gives leadership more flexibility to try new things and enter new markets without shareholders publicly breathing down its neck if it isn’t hitting margin or revenue goals on a quarterly basis. Because VR and AR are a growing field, with a lot of questions circling around them, the need to “push for revenue” on day one can haunt a public company interested in VR. Dell would not have that pressure, giving its design and engineering teams time to develop and be creative.
  • A strong design team: Despite being known as “just” a PC vendor to most of the market, Dell has a significant in-house design team that focuses on user interface, hardware design, and how technology interacts with people. I have seen much of this firsthand in meetings with Dell showcasing its plans for the future of computing; it develops and creates much more than most would think. Applying this team to VR and AR, including hardware and software interaction, could create compelling designs.

There is no clear answer or path to the future of virtual or augmented reality. It is what makes the segment simultaneously so exciting and frightening for those of us watching it all unfold, and for the companies that invest in it. There are and will remain many players in the field, and everyone from Facebook to Qualcomm will have some say in what the future of interactive computing looks like. The question is, will Dell be a part of that story too?

The VR App Stores’ Rocky Road

Over the past week, I have been playing with the Mirage Solo with Daydream, the standalone VR headset recently released by Lenovo. The headset uses the Daydream VR platform that has been available on the Daydream View headset since 2017. The key difference with the Mirage Solo, as the name gives away, is that you no longer require a phone to experience VR. The Mirage Solo also does not need a PC like the HTC Vive or Oculus Rift. It is, in fact, a direct competitor to the Oculus Go, but it uses a new technology called WorldSense that allows the headset to track the world around you, or at least a good square meter or so of it.

Overall I felt that the Mirage Solo delivers a decent experience, and I very much appreciated not having to worry about the phone overheating or running out of battery. I also felt the freedom from cables was a welcome improvement over my Oculus experience, even though it did not take much moving around before WorldSense would ask to re-center the device. The peace of mind of walking around without worrying about tripping, and the instant-on of wearing the headset and starting to enjoy content right away, were a good start for me.

The Content and Devices Causality Dilemma

Content is where the Mirage Solo shows its weakness. The good news is that out of the box the Mirage Solo has access to all the Daydream apps that are available in the Google Play Store and the YouTube content. The bad news is that the Daydream apps are all there is.

The content is not bad, but it is limited. Some of it really does a disservice to the Mirage Solo, as it lacks the quality someone investing $400 in the device would like to see. And this is the issue. Creating good quality content for VR is not cheap, and developers might understandably be reluctant to invest in doing so while the addressable market is limited. Good quality content comes at a price, with apps that cost as much as $19.99. As users might first try free or cheaper content, the lack of quality might put them off spending more. I find this to be a problem for the Play Store in particular, as consumers there have historically spent less money and relied on free apps more than in the iOS App Store. Delivering ad-funded apps in VR might also be more complex if you want to keep true to the content, or extremely annoying if you are not!

Lenovo smartly launched the Mirage Camera with Daydream so that users can create their immersive content by shooting videos that they can then enjoy with the Mirage Solo. That $300 price tag, however, might mostly appeal to early adopters.

While AR has similar issues with a lack of compelling apps, users are not investing extra money in a device to try AR in the same way VR users do. It seems that the interim step of screen-less viewers is coming to an end and the industry wants to move towards standalone headsets for the mass market, which makes content availability even more critical.

A Different Set of Rules

As I was trying different apps, I was also left wanting a different in-store purchase experience. With traditional apps, looking at the screenshots and reading the reviews is usually enough to get a sense of how good an app will be. I found that with VR there are way more variables at play.

The target audience age is the first thing you see when looking at purchasing an app, which is pretty straightforward. After that, you are given a sense of how much motion you will experience, which should be an indication of how sick you might feel if you suffer from motion sickness. I do, and I found that the guidance was a bit hit and miss. Aside from those couple of points, you really do not get a sense of how immersive the app will be, both from a realism perspective and an engagement one.

It seems to me that free trials are a must in a VR app store. Apple introduced the ability for developers to offer free trials for subscription apps in 2017, after resisting the idea for quite some time. This would work best for entertainment apps but not necessarily for all VR apps. The shift in spending from new apps to in-app purchases we have seen over the past couple of years within traditional app stores comes from many developers offering a free app and then opening up levels or features at a price. I am not sure this technique would necessarily work with VR, where a time-based approach might be preferable: you get ten minutes of the full experience before you are asked to pay for the app. Of course, developers can still open up levels and sell cheats, but a watered-down free version of the app might just not be compelling enough to get consumers to want more.

I also wonder if subscription services, similar to Xbox Live Gold, might be a good idea for power users, especially at this stage of market adoption when you want users to experience as much as possible and start evangelizing. Of course, big titles will build on the success of their traditional apps and might not need further help to succeed. Yet, I am hoping VR will open up the market to new titles and different experiences.

Overall I see the addressable market for VR coming from a blend of traditional gaming and mobile content consumption, spanning from games to video to educational and productivity apps. The more opportunities mainstream users have to try good quality content, the more rapid adoption will be, because with VR, trying is indeed believing.

My Attempt to Switch From Mac to Windows

I recently wrote about my frustrations with my MacBook keyboard due, in my opinion, to Apple’s obsession with thinness. I found my MacBook keyboard to be just too difficult to use and unreliable, as well. Even after a replacement, random keys continue to become mushy and don’t reliably register. In speaking with friends using recent Macs I hear much the same issue.

For the first time in twenty years, it got me to consider moving to a Windows 10 notebook. I never expected that to happen, because I think the MacOS is elegant, easy to use and visually appealing. It also works well with the iPhone I use. The tipping point came with my spending 2 to 3 hours a day at the keyboard working on a new book. But when I casually looked at what alternatives were available, I was surprised by the latest crop of Windows notebooks.

Costco and the local Microsoft Store had computers from Lenovo, Dell, Microsoft and HP that were beautiful and lightweight, with none of the compromises found on the MacBooks. I had been under the impression that thin and light meant limited ports and shorter battery life, but that’s not what I discovered.

I eventually picked a Lenovo X1 Carbon with its best-quality 14-inch, 2560 x 1440 non-touch glossy screen. It’s spectacular – almost OLED-like in sharpness, and intensely bright. The X1 also has a full complement of ports, a memory card slot, and that terrific keyboard.

My biggest reservation in switching notebooks was moving from the MacOS to the Windows 10 operating system. It’s taken me almost two weeks to become comfortable doing most things under Windows, including a visit to the local Microsoft Store for a short class. Clearly, Microsoft is remiss by not offering the migration tools that Google and Samsung do to help iPhone users move to Android.

Switching means abandoning some of the apps that I’ve grown accustomed to on the Mac, such as Mail, Fantastical, Grab, and Contacts. I tried using Outlook for Windows, but in spite of watching YouTube videos from third parties and calls to Microsoft, I’ve not gotten it to work reliably.

I was able to access my Apple iCloud web client and its online apps, but they’re not very robust for frequent use. Fortunately, Apple offers a Windows app to access my iCloud drive, so my documents and photos were readily available. Office for Windows seems slightly better than the Mac version. I decided to use Google’s online calendar, contacts, and email clients. They’ve all improved over time, particularly the new email interface. But you’re still limited to Gmail accounts and I wasn’t able to add my Apple email account.

I found Windows 10 to be much improved compared to the last time I tried it using Windows 8. There are still vestiges of the old version with the large tiles that seem unnecessary and redundant, and there are hidden settings that take some searching to find, such as the Control Panel. But Windows OS also has much-improved aesthetics with a clean, clear interface with many intuitive features. The large Cortana search window provides a powerful search for help on the computer and the web.

I still prefer MacOS, which I’d rate a 90 vs an 80 for Windows, using my arbitrary wine rating scale. The Windows computer hardware, however, beats Apple by a larger margin, 95 vs 70. If I were an Apple MacOS software engineer, I’d be unhappy that my fellow hardware engineers are shortchanging the software by offering products that are well behind the competition. There’s no doubt in my mind that Apple has lost its edge with its latest line of notebook computers and is way behind the Windows offerings. I’m likely not telling them anything they really don’t know. Last time I was at the Apple Store to repair my keyboard, they suggested I’d be better off with a MacBook Air.

Why I am Willing to Give Apple my Health Data

One of the challenges of life, regardless of who you are, is the quest to remain healthy. I admit that in my youth, this was not on the top of my list of things to be concerned with. Even into my thirties, I pretty much lived a life of excess and worked way too many hours and traveled for work without any restrictions on my schedule.

At the age of 35, during an annual physical, I showed signs of high blood pressure and minor heart arrhythmia and was told I needed to change my lifestyle. I was also put on a mild BP drug. As I left that doctor visit, I was a bit shocked at this news. I was young and felt invincible. But as I aged, and admittedly I did not change my lifestyle much given the warnings I received at age 35, my blood pressure issues got worse, my heart problems accelerated, and by age 48 I was diagnosed with Type 2 diabetes. At age 62, I had a heart attack and underwent a triple bypass.

From a genetics standpoint, both my mother and father had blood pressure and heart problems and were pre-diabetic in their later years. However, as we now know, genetics only determines a portion of our health destiny, while what we eat, our lifestyles, and environmental factors have a real impact on our actual health outcomes at any stage of our lives.

While I was growing up, we had very few tools that could help us monitor our health outside of simple things like scales, blood pressure cuffs we could use at home, and simple thermometers to read our temperatures. But these days we have home blood testing kits to check for various maladies. We have DNA services that return all types of data about potential health problems that may lie ahead. We also have smartwatches and fitness bands that monitor our steps, heart rate, and other activities, which are then sent to apps like Apple’s Health app that give us daily readings on various health data points. I even use the Dexcom G6 continuous glucose monitoring system, which gives me my blood sugar readings 24 hours a day, and which I can see at a glance on my Apple Watch.

One of the things these new tech tools for health monitoring have done is give people of all ages many ways to self-check and monitor their overall health. I am encouraged that even young people in their teens are using these health monitoring apps early on to try and stay healthy. I am even seeing senior citizens using things like the Apple Watch and fitness bands, although we need to see more of them using these tools in the future, as this generation is still a bit tech-challenged.

There are many companies in tech creating all types of products to keep us healthy and monitor our overall health conditions. However, Apple has taken a major leadership role with its aggressive approach of using the iPhone and Apple Watch to monitor and collect health data. More importantly, it has created a set of tools that anonymously send that data to various health researchers, so they can use it to create better treatments and medications to combat conditions such as multiple sclerosis, heart disease, concussions, melanoma, postpartum depression, and sleep disorders, for starters.

These tools are HealthKit and ResearchKit.

These tools have three objectives:

  1. Making medical research easier, so understanding disease is simpler.

  2. Getting more participants into studies, so that researchers get more data, which leads to more meaningful results.

  3. Taking research out of the lab and into the real world.

Apple also has another important tool called CareKit, a software framework that allows developers to build medically focused apps that track and manage medical care.
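
To make this a little more concrete, here is a minimal sketch, in Swift, of how a hypothetical study app might use Apple’s HealthKit framework to ask for permission and then read a user’s recent heart-rate samples. The HealthKit types and calls shown are part of Apple’s public API, but the surrounding app logic is purely illustrative and not how any particular Apple study is implemented.

    import HealthKit

    let healthStore = HKHealthStore()

    // Data types this hypothetical study app asks permission to read.
    // (A real app also needs the NSHealthShareUsageDescription Info.plist entry.)
    let readTypes: Set<HKObjectType> = [
        HKObjectType.quantityType(forIdentifier: .heartRate)!,
        HKObjectType.quantityType(forIdentifier: .bloodGlucose)!
    ]

    // HealthKit shares nothing until the user explicitly grants access.
    healthStore.requestAuthorization(toShare: nil, read: readTypes) { granted, _ in
        guard granted else { return }

        // Query the ten most recent heart-rate samples, newest first.
        let heartRate = HKQuantityType.quantityType(forIdentifier: .heartRate)!
        let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierEndDate, ascending: false)
        let query = HKSampleQuery(sampleType: heartRate, predicate: nil, limit: 10,
                                  sortDescriptors: [newestFirst]) { _, samples, _ in
            let bpm = HKUnit.count().unitDivided(by: .minute())
            for case let sample as HKQuantitySample in samples ?? [] {
                print("\(sample.quantity.doubleValue(for: bpm)) bpm at \(sample.endDate)")
            }
        }
        healthStore.execute(query)
    }

ResearchKit and CareKit build on this same permission model, adding the consent flows, surveys, and care plans that a study or care app layers on top.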

As a professional market researcher, I understand how important data is to understanding various aspects of the tech market I cover. But the kind of data I look for does not deal with life-and-death issues in a human sense. Medical researchers, on the other hand, desperately need as much data and information as possible about a particular disease they are researching in order to better understand it, look for ways to treat it, and ultimately defeat the disease altogether.

When Apple introduced the heart study last year, I was one of the first to sign up. As a heart patient for life, I clearly want to have the best solutions for dealing with this disease and if my heart data can help deliver better treatment for all, then I am all in. The data I send to Apple is anonymous and private. Consequently, I did not hesitate to participate in this study. In my discussions with others who have diseases that are tracked via Apple products and HealthKit and ResearchKit, they also seem to be very willing to send that data to researchers via Apple, as they too want to see better ways to treat and possibly cure their particular diseases.

Apple’s role in helping people track their health and then get that data to researchers should not be underestimated. This is a big deal for Apple and, more importantly, for the health researchers and professionals who need as much help as possible as they tackle the various health issues and diseases they study. I see this as being one of Apple’s greatest callings. In last September’s keynote, Apple CEO Tim Cook stated that “healthcare is big for Apple’s future.”

I had a meeting with the retired CEO of a major health organization a few years back, well before Apple declared its strong commitment to health apps and products. In the meeting, he told me that he had been in talks with Apple about its thinking on future health apps and services. Before he left my office, he made a prediction to me. He said, “Apple will emerge as the major company that will change the face of healthcare.” Given the timing of this meeting, which took place not long after Steve Jobs died, his prediction seems prophetic.

We are still in the early stages of this data impacting current research studies on the various diseases I mentioned above. Because these tools can be applied to all types of health conditions, I expect to see more studies taking advantage of Apple’s various health research tools and apps.

We should all be rooting for Apple to succeed with their health initiatives. Of course, it would be good for their business if they are successful, but it would be a bigger win for mankind if they succeed.

Telecom and Mobile Implications of the AT&T-Time Warner Deal

Yesterday, Judge Leon ruled that AT&T can acquire Time Warner. In this column, I’d like to discuss the broad implications of the deal, and more specifically what it means for the telecom and mobile landscape.

First off, congratulations to AT&T. They stuck to their guns and didn’t agree to any of the initial – and unreasonable – DOJ terms to sell off pieces of Time Warner to get the deal through. Hopefully, AT&T will be more successful with Time Warner than AOL was, which, ironically, now sits in the hands of arch-rival Verizon’s unfortunately named Oath.

Some of the benefits of the deal will be apparent to consumers within a few months. Expect some additional bennies and content bundles for AT&T wireless subscribers. HBO for free, a la T-Mobile’s Netflix offer? In the medium term, marrying the huge Time Warner ad inventory with the insights into AT&T-DTV’s customers will create value. It will be a longer-term project to build a more effective ad targeting platform, pulling together the content, ad inventory, and customer data in an effective – and responsible – manner.

AT&T will have to tread carefully. With the tech industry reeling from myriad episodes of inappropriate exposure/use of customer data, the $200 billion AT&T-Time Warner behemoth, which will still be under greater regulatory scrutiny than its Silicon Valley brethren, will have to be both careful and transparent with regard to how that customer data is leveraged. It will also have to abide by the near promises it made during the trial to not discriminate in the provision of Time Warner content to DTV rivals. That said, the TV and rights fees landscape is in turmoil and under pressure, so needles will have to be threaded here.

Against this backdrop, and with uncanny timing, net neutrality was officially repealed this week, smoothing the way for all of the above to be implemented.

The clarity of the ruling and its lack of conditions will help to unleash a wave of M&A activity in the media and content landscape. Most immediately, the bid for 21st Century Fox assets will heat up, with Comcast entering the fray.

I believe this will also ease the path for the T-Mobile/Sprint deal. Just as the TV market has changed hugely with OTT, streaming, and the impact of Netflix, Amazon, Apple, YouTube and so on, so too has the telecom business. Landline is all but dead, broadband is a near monopoly in 50% of the country, and demand for wireless data (driven by video) and the capex to support it remain near insatiable. It is hard to imagine T-Mobile and Sprint competing successfully, independently, and profitably with AT&T and Verizon long-term, especially with DISH’s spectrum, Comcast/Charter MVNOs, and the possible entry of some internet/web giant into the space as part of the mix.

I think T-Mobile and Sprint can successfully make the argument that the industry landscape has changed significantly since a deal was first broached a few years ago. The biggest benefit of 5G is capacity – in the form of spectrum breadth and depth, and cell site density. T-Mobile and Sprint will be able to do more together than they would do independently (1+1=3, as it were).

5G will be another beneficiary of this evolving telecom/media landscape. Verizon, AT&T, T-Mobile (Layer3), and Comcast all have important content and video assets which, in addition to driving traffic growth, will also unleash innovation in apps, games, and so on that will form some of the business cases for 5G, such as AR and VR. This thinking was on display last week at the AT&T Shape conference, which was held in Los Angeles at – wait for it – the Time Warner studios lot (see my column on that here).

I also think that Verizon, Comcast, and AT&T getting more deeply into content and media will incent some of the major internet players, namely Google, Facebook, Amazon, Apple, and Netflix to be more masters of their own domain with regard to telecom and mobile. At the very least, it will drive the development of edge networking (and hence small cells/data centers) and 5G. One could also envision a deal for DISH’s spectrum, their participation in future spectrum auctions, leveraging Wi-Fi/unlicensed/3.5 GHz spectrum, or some level of MVNO relationship — or some hybrid of all of the above.

The telecom landscape will look less homogeneous going forward. Mobile-centric AT&T now looks more like broadband-centric Comcast than it does Verizon. Verizon, with its leadership in 5G, emphasis on 5G FWA, and appointment of former Ericsson CEO Hans Vestberg as its next CEO, has taken a turn toward re-emphasizing the network. It is still in the early stages of truly leveraging its Oath asset, though if it is going to be a serious player in media/content/advertising, there’s more dealing to be done. T-Mobile and Sprint together look the most like a wireless pure play, though I could certainly see how Sprint’s 2.5 GHz spectrum could be leveraged as a potential competitor to broadband in some markets. And as part of the likely acceleration of M&A in the telecom/media arena over the next year, it is hard to imagine DISH’s spectrum lying fallow for much longer.

News You Might Have Missed: Week of June 15, 2018

Office 365 Gets a Redesign

This week Microsoft announced that it will introduce a series of changes to Office.com and Office 365. The changes are built on extensive user feedback and aim to focus on simplicity and context.

The initial set of updates includes three changes:

Simplified ribbon– An updated version of the ribbon designed to help users focus on their work and collaborate. People who prefer to dedicate more screen space to the commands will still be able to expand the ribbon to the classic three-line view.

New colors and icons—Across the apps you’ll start to see new colors and new icons built as scalable graphics—so they render with crisp, clean lines on screens of any size. These changes are designed to both modernize the user experience and make it more inclusive and accessible.

Search—Search will become a much more important element of the user experience, providing access to commands, content, and people. With “zero query search,” simply placing your cursor in the search box will bring up recommendations powered by AI and the Microsoft Graph.

Via Microsoft 

  • I use Office 365 every day across different devices and operating systems and I was delighted to see these changes yet disappointed I will have to wait months before I can use the new tools. This was a classic case where I had to remind myself that not every user is like me.
  • There are over one billion Office users across the world who are very reliant on it for their business. Any change, albeit small, might be perceived as a disruption of someone’s workflow.
  • Rolling out the changes to selected users to gather feedback and make adjustments seems like a sensible move. Yet, Microsoft has to balance what very pragmatic users might feel comfortable with and what millennials, who tend to be more open to change, and have likely grown up with a mix of productivity applications, might be looking for.
  • I like Microsoft’s approach to getting rid of the clutter, but I like even more the idea of using AI to see what features a specific user might need in a particular context.
  • This is what I think Apple got right with the Touch Bar on the MacBook Pro, a tool that, for me, came alive when Office rolled out support for it.
  • I do wonder if Microsoft will be able to nudge its more pragmatic users forward through AI by serving up suggestions that will help the adoption of other services such as OneDrive, Cortana, and Teams. I hope Microsoft will at least try to do so, as the return would be considerable.

Google 2018 Diversity Report

Women make up 30.9% of our global workforce, and men 69.1%. In terms of race and ethnicity (U.S. data only), 2.5% of Google’s workforce is Black; 3.6% is Hispanic/Latinx; 36.3% is Asian; 4.2% is multiracial (two or more races); 0.3% are Native American, Alaska Native, Native Hawaiian or Pacific Islander; and 53.1% is White. Representation for women, Black, and Latinx Googlers is similar to last year, increasing by only 0.1 percentage point (ppt) for each of these groups.

In the U.S. in 2017, leadership hires were 5.4% Black, and Black representation in leadership increased from 1.5% in 2017 to 2% in 2018. Latinx representation in Google’s leadership is 1.8% (up from 1.7%).

Via Google  

  • While Google highlights the improvements year over year, it is hard to see past the sad picture this report still paints. And how well it fits into the rest of tech.
  • You just need to consider those leadership numbers, even in a glass-half-full kind of way, to see how much work remains to be done: the company’s higher ranks are still 74.5% male and 66.9% white.
  • Google says it has set a goal to reach or exceed the available talent pool, a statement that really says very little as to what Google wants to achieve. The current talent pool, at least as drawn from the usual colleges, is quite limited. When it comes to minorities, talent is plentiful, but you often must look outside the usual sources. Community colleges, for instance, are a great source of talent, but very few tech recruiters would look there. The same can be said about geography, as talent is often sought within Silicon Valley and the big cities first. Referral programs are also a technique that does not favor minorities as, put simply, white employees tend to refer and hire white candidates.
  • It is interesting to me that rather than highlighting new hiring strategies, Google focused on talking about how it is investing in improving the diversity of its early pipeline talent. This year, for instance, its internship program welcomes its largest cohort of Black, Latinx and/or women participants: 49%. In 2017, it launched a computer science residency program that attracts top software engineering students from the Black community directly to Google. Finally, Google also offers a three-week computer science course for graduating high school seniors through Google’s Computer Science Summer Institute.
  • Google has also provided attrition rates for the first time, numbers that show the rate at which people leave the company. They show that Google is doing well at retaining women, who are leaving the company at a lower rate than men. The story is a little different for Black and Latinx employees, who had the highest attrition rates in 2017.
  • Attrition numbers best show how tech has not only a diversity problem but an inclusion problem as well. It is hard to feel included in a company where so few people look like you, especially across management.
  • I always thought the hiring of Danielle Brown from Intel as Google’s Chief Diversity & Inclusion Officer was a curious one. If you asked me to point to a company in tech that is doing well when it comes to diversity and inclusion, Intel would certainly not be top of mind. That said, I hope Brown can do more at Google than she did at Intel.

AMD Could Grab 15% of the Server Market, says Intel

Before the launch of its Zen-architecture processors, AMD had fallen to basically zero percent market share in the server and data center space. At its peak, AMD held 25% of the market with the Opteron family, but limited improvement in performance and features slowly dragged the brand down and Intel took over the segment, providing valuable margin and revenue.

As I have written many times, the new EPYC family of chips has the capability to take back market share from Intel in the server space with its combination of performance and price-aggressive sales. AMD has internally been targeting a 5% share of this segment, worth at least $1B of the total $20B market.

However, it appears that AMD might be underselling its own potential, and Intel’s CEO agrees.

In a new update from analyst firm Instinet, the group reports that it met and spoke directly with Intel CEO Brian Krzanich and found that Intel sees a brighter future for AMD in the data center. Krzanich bluntly stated that Intel would lose server share to AMD in 2018, which is an easy statement to back up. Going from near-zero share to any measurable sales will mean fewer parts sold by Intel.

Clearly AMD is not holding back on marketing for EPYC.

In the discussion, Krzanich stated that “it was Intel’s job to not let AMD capture 15-20% market share.” If Intel is preparing for a market where AMD is able to jump to that level of sales and server deployment, then the future for both companies could see drastic shifts. If AMD is able to capture 15% of data center processor sales, that would equate to $3B in revenue migrating from the incumbent to the challenger. By no measurement is that merely a footnote.

For months I have been writing that AMD products and roadmaps, along with the impressive execution the teams have provided, would turn into long-term advantages for the company. AMD knows that it cannot compete in every portion of the data center market with the EPYC chip family as it exists today, but where it does offer performance advantages or equivalency, AMD was smart enough to be aggressive with pricing and marketing, essentially forcing major customers, from Microsoft to Tencent, to test and deploy hardware.

Apparently Intel feels similarly.

Other details in the commentary from Instinet show the amount of strain Intel’s slowing production roadmap is putting on product development. Intel recently announced during an earnings call that its 10nm process technology, which would allow it to produce smaller, faster, more power-efficient chips, was delayed until 2019.

Krzanich claims that customers do not care about the specifics of how the chips are made, only that performance and features improve year over year. Intel plans updates to its current process technology for additional tweaking of designs, but the longer Intel takes to improve its manufacturing by a significant amount, the more time rivals AMD and NVIDIA will have to use third-party foundry advantages to improve their market positions.

What I Learned from the Women in Technology Summit

This week I spent a couple of days at the Women in Technology Summit hosted by WITI. I was invited to moderate two panels, and rather than just going in for those, I decided to invest some time listening to what other speakers had to say, attending a workshop on how to better communicate with men and build allies, and networking. Over the years, I have attended a few women in tech luncheons and breakfasts at broader industry events, but I usually shy away from networking events marketed explicitly at women. This is mostly because I prefer to fight my way into events where the majority of attendees are men, as this is, after all, what best reflects my day to day in tech. That said, I think there is power in conversations that happen in an environment where you feel it is safe to be open, and this is precisely what the WITI Summit offered. There is power in sharing stories and opinions, and in openly talking about the challenges we face without being concerned about being judged, with the reassurance that, more often than not, the person you are talking to can relate to what you are saying.

There are Many Smart Women in Tech

You often hear men complain about a shortage of women in tech. Not enough women to keynote at CES, not enough women in tech to follow on Twitter, not enough women in tech to invite as guests on their podcast. Time and time again, I see women making extensive lists of the talent that is out there if you are willing to look. And by look, I mean just taking a quick look: these women are not hiding under a rock; they are openly visible, doing their thing and demonstrating their awesomeness.

In case you are tempted to believe this shortage nonsense, let me tell you that at the WITI Summit there were over 100, yes one hundred, speakers, panelists, and coaches, and guess what, they were all women. I can bet that the organizers did not have to send out search parties to hunt them down either. What struck me was the quality of the women on stage. They knew what they were talking about, many had science and engineering backgrounds, they were engaged with the audience, they were generous with their knowledge and time, and they genuinely wanted to make a difference.

Something that really struck me in listening to the speakers was that the vast majority of them did not just tell a story or speak hypothetically about a topic, whether the topic was a new technology like AI or the issue of diversity and inclusion in tech. They were prepared on the topic, talked with purpose, and always left the audience something to reflect on. All while rarely mentioning their own personal achievements, other than to make a point.

There was a Lack of Young White Women in the Conversation

As I was looking at the crowd in the sessions, I started to notice that the mix looked a little different from what I see at other tech events. Coming fresh from the round of developer events over the past couple of months, and being used to seeing young white women make up a significant proportion of the female mix, I was stunned to find a considerable lack of millennial white women at the summit. There were many millennial minority women in the audience, but it was hard to see young white women.

I am aware that millennials are the group where minorities are becoming the numerical majority, but I think there might be something else going on here. I do wonder if young white women share my feeling that we should find our place at industry events rather than at events focused on women only. Maybe young white women are in general more comfortable with their place in tech, thanks in part to the efforts of those who came before them.

I hate to think that young white women do not want to be part of the conversation about diversity and inclusion. As a matter of fact, I find it hard to believe that is the case. I do wonder, however, if they might not think there is something to be learned from women who were the first in their company to become the CEO, a lead engineer or a product manager. Of course, the bigger point is that whether or not they think they can learn from or benefit further from being part of the conversation is somewhat irrelevant. What I do hope is that young white women understand that they, like me, have a responsibility to help and support other women and women from ethnic minorities.

The Best Pieces of Advice I Heard

As speakers shared their stories and coaches shared their knowledge, I was listening to find little nuggets of wisdom, and that is precisely what I found:

Ahalya Kethees, founder of Lead with Brilliance, said: “You cannot be truly curious about someone if you are judgmental.” I never thought about it this way, but it is true that if you are judging someone, it is hard to keep an open mind and want to know more about what they are talking about or who they are.

Minette Norman, VP of Engineering at Autodesk, said: “Stay true to yourself, don’t try and be one of the boys.” I can really relate to this. I tried to fit in by being like one of the boys, but it just was not for me because it was not me. Over the years I found that being me, with my faults and quirks, was the most effective approach to building relationships with clients as well as colleagues.

Several of the speakers urged the audience to go and get a career coach. And apparently, according to a survey IDC ran across WITI members, a male coach is more likely to help you get a higher salary than a female coach is! Not a surprise when you consider that women are generally not good at negotiating their contracts and assessing their worth!

Barbara Nelson, GM & VP at Western Digital, said: “Fight your own battle.” Yes, we need sponsors, advocates, and allies, but we also need to be prepared to speak up, ask the hard questions and fight our own battles.

Lastly, I leave you with my action point: amplify women’s voices. Highlight when one of your female colleagues says or does something smart, retweet and follow other women in tech, and stop a male colleague when he interrupts a woman in a meeting so she gets to finish talking. Let’s not fight among ourselves to get a seat at the table; let’s bring in a chair for someone else when we get there!

The Business of Business Software

When most people think about software for business, they tend to think of things like Microsoft Office. After all, Office is the application suite that many of us spend a great deal of time in during our work days.

In reality, however, productivity suites like Office only represent a small portion of the overall market for software used in businesses and other large enterprises. Some of the biggest categories are things like Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Business Intelligence (BI) and analytics. In addition, there are millions of custom applications (many of which are built with these types of tools as a foundation) that play an extremely important role in the operation of today’s businesses.

While Microsoft is an important player in many of these categories, it’s companies like IBM, SAP, Oracle, and Salesforce that are the leaders in many of these lesser-known segments that are commonly referred to as “back office” operations (a historical phrase that stems from many business organizations having the operational teams doing this work physically located in the rear section of office buildings). In fact, companies like SAP have built large businesses creating the tools and platforms that sit at the central operational point for many organizations in areas ranging from supply-chain management to human resources and other personnel systems.

At last week’s SAPPHIRE NOW, SAP’s annual customer conference, the company announced a major entry into the “front office” CRM market with C/4 HANA. The new offering ties together the technology from a number of different acquisitions it has made to create a suite of applications and cloud services that allows sales and marketing people (who typically sat in the “front” part of office buildings) to organize all the critical information about their customers in a single place. C/4 HANA builds on the company’s existing in-memory HANA database architecture, which stores all data and applications in server memory (versus in storage) to speed overall performance.
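To make the in-memory idea concrete, here is a toy Python sketch (my own illustration, not SAP’s actual architecture or APIs, with a made-up "customers.db" sample database) that contrasts answering lookups by querying an on-disk database on every request with loading the data into memory once and serving the same lookups from RAM:

```python
import sqlite3
import time

DB_PATH = "customers.db"  # hypothetical sample database for this sketch

# Create and populate a small on-disk table
conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT OR REPLACE INTO customers VALUES (?, ?)",
                 [(i, f"Customer {i}") for i in range(10_000)])
conn.commit()

# Disk-backed: one SQL round trip per lookup
start = time.perf_counter()
for i in range(10_000):
    conn.execute("SELECT name FROM customers WHERE id = ?", (i,)).fetchone()
disk_time = time.perf_counter() - start

# In-memory: load the table once, then answer lookups straight from RAM
cache = dict(conn.execute("SELECT id, name FROM customers"))
start = time.perf_counter()
for i in range(10_000):
    cache[i]
mem_time = time.perf_counter() - start

print(f"disk-backed lookups: {disk_time:.3f}s, in-memory lookups: {mem_time:.3f}s")
```

Real in-memory platforms obviously do far more (persistence, transactions, analytics on the same data), but the sketch shows why keeping the working set in memory speeds up repeated access.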

What’s interesting about the release is the position it holds in the overall evolution of the enterprise software market. For several decades, companies like SAP were strongly associated with old legacy software that ran only in the physical servers within a company’s data center—or “on premise,” as many like to say. The applications were large, monolithic chunks of code that were so complicated, they almost always required external help from large consulting firms and system integrators, or SIs (such as Accenture, CapGemini, the services division of IBM, etc.), to properly install and deploy.

Over the last decade or so, however, we’ve seen companies like SAP and IBM evolve their software architectures and approaches, in large part because of the dramatic rise of cloud-based software companies such as Salesforce.com. The efficiencies, flexibility, and cost-savings enabled by these internet-based business software companies and the new business models they offered—such as Software as a Service (SaaS), Platform as a Service (PaaS), etc.—forced some dramatic changes from the traditional enterprise software vendors. In particular, we saw a dramatic increase in the use of public cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud, to host and run applications that traditionally only ran in corporate data centers. In addition, we’ve witnessed the dramatic increase of enterprise mobile applications that provide a means to run or interact with business software on our smartphones and other mobile devices.

The new C/4 HANA release is an intriguing example of these many developments because it is a cloud-first set of tools that companies can now run in the public cloud across any of these major cloud platforms, in their own private cloud within their data center, or in a combined “hybrid” cloud model. Architecturally, the suite incorporates a large number of microservices—a dramatically different and more modular structure than older monolithic applications—that offers much more flexibility in terms of how the software can be leveraged, updated, and enhanced. In particular, the ability to do things like plug in new enhancements such as AI and machine learning via SAP’s Leonardo suite of new technologies is indicative of the new approach the company is taking with its software offerings.
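As a rough illustration of the microservices idea (a hypothetical sketch using Flask, with made-up service and field names rather than SAP’s actual C/4 HANA interfaces), each narrow capability becomes its own small HTTP service that other services call over the network and that can be deployed and updated independently:

```python
from flask import Flask, jsonify

app = Flask("customer-profile-service")  # one small service, one responsibility

# Toy in-process data store standing in for this service's own database
_PROFILES = {"42": {"id": "42", "name": "Acme Corp", "segment": "enterprise"}}

@app.route("/profiles/<customer_id>")
def get_profile(customer_id):
    """Return a single customer profile as JSON, or a 404 if unknown."""
    profile = _PROFILES.get(customer_id)
    if profile is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(profile)

if __name__ == "__main__":
    # Other services (orders, marketing, AI enrichment) would run as separate
    # processes and call this one over HTTP, so each can be updated on its own.
    app.run(port=5001)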

At this year’s SAPPHIRE NOW, SAP also announced an SDK (software development kit) that will allow native access to all their services from Google’s Android platform for mobile access. This builds on the work that the company had previously done for iOS and Apple devices.

Even with all these enhancements and long-term evolutionary progress, there’s no question that the bulk of enterprise software offerings can still be extremely complex and difficult to completely decipher. However, it is also clear that tremendous progress is being made and that, in turn, is helping the companies that use these tools improve their efficiencies and enhance the digital readiness of their organizations. As the business environment continues to advance, it’s good to see the toolmakers who’ve supported these companies taking the steps necessary to make these digital transformations possible.