Reading Time: 1 minute
This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Microsoft’s new Surface Go mini 2-in-1 device and Apple’s updated MacBook Pros, analyzing the recent PC market shipment numbers, and talking about the latest version of Microsoft’s chat application Teams.
If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast
Reading Time: 4 minutes
We are all loving, and at the same time overwhelmed by, the era of ‘peak TV’. A similar thing is happening with the evolution of how we watch TV. This is a landscape experiencing a glut of offerings, with more coming – and with it the inevitable fragmentation and customer confusion. If I look at the roadmap over the next couple of years, I see this becoming worse before it gets better. The question is, how might it get better, and who will be the winners (hopefully consumers) and losers?
First, how did we get here? For the 50+ crowd, it started with the VCR. For the 40+ crowd, things changed with DVRs and the whole notion of time shifting. But that landscape was still largely owned by the cable companies, who turned this into a new way to charge you an extra 10-15 bucks a month. The next major shift was the penetration of good-enough broadband networks and Netflix’s big shift to online – and the bet on content and concomitant rise of other ‘over the top’ (OTT) options, from Hulu to Amazon to YouTube and a whole bunch of others. In recognition, and perhaps semi-capitulation, those who were being disrupted — Comcast, DISH, DirecTV — are also now disrupting themselves with a slew of the industry’s worst acronyms since ‘throttling’: ‘skinny bundles’, ‘vMVPDs’, and the like.
How are they doing it, you ask? In part, by stripping out some of the stuff you don’t want to pay for, but then occasionally want…such as live sports. Remember the Oscar-nominated song, “Blame Canada” from South Park? The Pay TV anthem should be “Blame the NFL! Blame ESPN!” for this whole confusing mess. If a neophyte/technophobe asks you the following question at a cocktail party: “I hear I can cut cable. What are the options?” G’luck, you’ll be three drinks into it or they’ll have walked off.
And just when you think this might have sorted itself out…when the vMVPDs (say that one out loud or try typing it ten times) like DirecTV Now and YouTube TV offer a critical mass of channels and functionality, so you only have to make eensey-weensy levels of compromise instead of God-awful [First World Problem-ish] levels of compromise, along come a whole lot of industry perambulations that are going to create a hot mess all over again. Love Netflix? Say goodbye to Disney and Pixar next year, and say hello to Disney’s own branded streaming service. Thought you had sports figured out, or chose the vMVPD that had the most sports content? Say hello to ESPN+ (has anyone said hello to ESPN+?). Then, look at the daily headlines — AT&T-Time Warner (please, please, please don’t mess with HBO)! The battle for Viacom/CBS! The bidding war for 21st Century Fox! What’s Apple Gonna Do? — and the 20%-ish of you who have cut the cord, alongside the rest of you who are understandably deer in the headlights on this one, will have rapidly concluded that whatever choice you made isn’t exactly future-proofed.
And then, on your next trip outside the U.S., try to figure out what you can and can’t watch on your tablet (Netflix, mostly yes! Most other stuff, mostly no!).
Actually, what you might conclude is that skinny bundling it, plus the new (surprise!) unbundled price of broadband, plus Netflix, plus Hulu, plus Amazon, plus HBO, plus Showtime, plus some sports, plus the 3-4 things your particular vMVPD inevitably doesn’t have, plus the three sticks for your “not smart TVs”…ends up being more expensive and a whole lot more trouble than if you’d just stuck with cable in the first place. Or, plunge yourself back into the 1970s with an antenna (really, they’re selling like hot cakes) and actually watch This is Us when it’s on and with your family. Or, you could head in the Leave No Trace direction.
Kidding aside, there are some great things happening. Yes, Peak TV with 500+ new scripted shows this year alone, and untold billions being unleashed to create more content. Lots more choice of programming options and bundles. And there are steady improvements in on-screen UIs, programming guides, and even voice integrations such as Alexa to help make all this stuff a bit easier to sort, search for, and figure out.
But the business end of this is going to go through a lot of tumult over the next 3-4 years. First, there are 3-4 major deals involving major media/content companies, with more inevitably coming. Second, there will be landmark battles over rights fees as the media landscape gets rearranged in this wave of M&A. Prediction: the sports leagues are in for a takedown. Third, there’s going to be a shakeout in the whole vMVPD space, with 2-3 emerging as clear winners. How this will all play out, with the move by some properties such as Disney, ESPN, CBS, and so on, to their own direct-to-consumer offerings, is anyone’s guess. Parenthetically, I think the direct-to-consumer approach, for all but a few properties, will be a disaster.
The end result, in my view, might be greater emphasis on a la carte, one-off type offerings. Subscribe to FX this month to binge a couple of shows, then switch to Showtime next month. Buy a season’s worth of your favorite ball team through MLB, rather than choose the vMVPD that carries your local sports channel. All of this might be made easier by an evolution in the UI, in search, and better integration in the new generation of smart TVs. This is a great project for the voice-driven assistants, as we’ve started to see with Alexa and Siri (though they still have a long way to go). It’s also a fun playground for AI, as long as it doesn’t get too creepy on us.
Consumers might save a few bucks along the way, but at the other end, they’ll be a lot more educated on the arcana of rights fees, the industry landscape, and what content is worth paying for. But it’ll be messy along the way. And it will get worse before it gets better.
Reading Time: 5 minutes
AT&T to be the Exclusive US Distributor of the Magic Leap One
This week Magic Leap announced that the Magic Leap One Creator Edition will be available later in the summer.
When it becomes available for consumers – a timeframe that was not specified – AT&T customers will be among the first to experience it in select AT&T stores in Atlanta, Boston, Chicago, Los Angeles, and San Francisco, with more markets to follow.
In a developer-focused Twitch stream, several Magic Leap employees offered details about the system specs for the AR headset. The headset will be powered by an Nvidia Tegra X2 system, probably one of the more powerful options for mobile devices, though it is bulky enough that the company needed to build a dedicated hip pack in order to house it.
Reading Time: 4 minutes
The battle over audio in the home may be a more fundamental one than many realize. The iPod kicked off the battle for personal audio. As music transitions to streaming services like Spotify and Apple Music, this battle continues. But as an extension of personal audio, the battle for the home may be as strategically important.
Reading Time: 4 minutes
After four long, long years of development in which many in the outside world (myself included) doubted that the product would ever see the light of day, Magic Leap held an event this week to give some final details on its development kit. First, the Magic Leap One Creator Edition will be shipping “this summer,” though nothing more specific was given. Pricing is still unknown, though hints at it being in the realm of a “high end smartphone” point to this as a ~$1,500 item.
For the uninitiated, Magic Leap is the company behind countless videos that appeared to show the most amazing augmented reality demonstrations you can imagine. The demos, some of which claimed to be captured rather than created, were so mind-blowing that it was easy to dismiss them as fantasy. Through this series of live streams, the Magic Leap team is attempting to demonstrate the capability of the hardware and leave the negative attention behind.
Magic Leap showcased a demo called Dodge, in which the wearer combines the use of their hand as a pointing and action device with the real world. As the wearer looks around the room, a grid is projected on the floor, table, and couch, indicating the system recognizes the surfaces thanks to the depth camera integration. Using an ungloved hand to point and pinch (replicating a click action), the user sets locations for a rock monster that emerges from the surface in a fun animation. It then tosses stones your way, which you can block with your hand and push away, or move to the side and watch them float harmlessly past – one time even hitting a wall behind you and breaking up.
The demo is a bit jittery and far from perfect, but it proves that the technology is real. And the magic of watching a stone thrown past your head and virtually breaking on a real, physical surface is…awesome.
The other new information released concerned the hardware powering the One. For the first time outside of the automotive industry, the Tegra X2 from NVIDIA makes its way into a device. The Magic Leap One requires a substantial amount of computing horsepower both to track the world around the user and to generate imagery realistic enough to immerse them. The previous-generation Tegra X1 is what powers the Nintendo Switch and NVIDIA’s SHIELD, and the X2 can offer as much as 50% better performance than that.
The TX2 is an SoC with four Arm Cortex-A57 CPU cores and two more powerful NVIDIA-designed Denver2 ARMv8 cores. A Pascal-based GPU complex with 256 CUDA cores is included as well, a small step below a budget discrete graphics card for PCs like the GeForce GT 1030. This is an impressive amount of performance for a device meant to be worn, and with the belt-mounted design that Magic Leap has integrated, we can avoid the discomfort of the heat and battery on our foreheads.
The division of processing is interesting as well. Magic Leap has dedicated half of the CPU cores to developer access (2x Arm A57 and 1x NVIDIA Denver2) while the other half are utilized for system functionality. This helps handle the overhead of monitoring the world-facing sensors and feeding the GPU the data it needs to crunch to generate the AR imagery. No mention was made of dividing the resources of the 256 Pascal cores, but there is a lot to go around. It’s a good idea on Magic Leap’s part to ensure that developers are forced to leave enough hardware headroom for system functionality, drastically reducing the chances of frame drops, stutter, etc.
The chip selection for Magic Leap is both surprising and not surprising. NVIDIA’s Tegra X2 is easily the most powerful mobile graphics system currently available, though I question the power consumption of the SoC and how that might affect battery life for a mobile device like this. Many had expected a Qualcomm Snapdragon part to be at the heart of the One, both because of the San Diego company’s emphasis on VR/AR and mobile compute, and because Qualcomm had invested in the small tech firm. At least for now, the performance that NVIDIA can provide overrides whatever advantages competing chips might have, and the green team can chalk up yet another win for its AI/graphics/compute story line.
There is still much to learn about the Magic Leap One, including where and how it will be sold to consumers. This first hardware is targeting developers just as the first waves of VR headsets from Oculus and HTC did; a necessary move to have any hope of creating a software library worthy of the expensive purchase. AT&T announced that it would be the “exclusive wireless distributor” for Magic Leap One devices in the US, but that is a specific niche that won’t reach much of the total AR user base. As with other VR technologies, this is something that will likely need to be demoed to be believed, so stations at Best Buy and other brick-and-mortar stores are going to be required.
For now, consider me moved from the “it’s never going to happen” camp to the “I’ll believe it when I try it” one instead. That’s not a HUGE upgrade for Magic Leap, but perhaps the fears of vaporware can finally be abated.
Reading Time: 3 minutes
Trump’s Denuclearization Summit with North Korea ignored a big tech threat affecting the US, South Korea and foreign governments around the world
When President Trump met recently with Kim Jong-un of North Korea in Singapore, the topic of denuclearization was the #1 issue on the table. Nuclear weapons are a huge threat to humanity and could be used in a war to wipe out millions of people in the heat of battle.
Reading Time: 3 minutes
Microsoft just launched the Surface Go, a 10” Surface powered by the 7th Generation Intel Pentium Gold Processor, in a fanless design, offering 9 hours of battery, priced at $399.
Size is always a difficult topic to discuss when it comes to tablets. The balance between how much screen real estate you need to be productive and how small a device you want for mobility is a very tricky one, and a very personal one too. I always wanted a smaller Surface to better balance work and play. So far I have been treating Surface Pro more like I would my PC/Mac than my 10″ iPad Pro. Of course, device size is only one part of the equation when what you want is to balance work and play, as app availability plays a prominent role in reaching that balance.
What cannot be argued is the greater affordability of a sub-$500 Surface and the opportunity it opens to create a broader addressable market across the enterprise, consumer, and education segments.
Lots changed since Surface RT
I know there is a temptation to think of Surface Go as Surface RT’s successor and dismiss it even before it launches, but I ask you not to do so just yet. I say this not because Surface Go is running on Intel and not ARM, but because a lot has changed in the market since 2012, and even since Surface 3.
When Surface RT hit the market, Microsoft was responding to the success the iPad was having in the consumer market as well as the hype around tablets taking over the PC market. Beyond early adopters, however, consumers were still figuring out if there was space in their life between their smartphone and their PC – many are still wrestling with that today – and enterprises were trying to understand if employees could actually be productive on a device that was not a laptop, with an operating system, Windows 8, that was not optimized for touch. Moreover, Surface was a new brand for consumers, and Microsoft an unproven supplier for the enterprise market.
I don’t need to remind you that Surface RT was a flop and Microsoft went back to the drawing board to bring to market a more affordable Surface to accompany Surface Pro. Fast-forward to 2015, and Surface 3 found its place in some enterprises as IT departments bought more into the 2-in-1 trend and felt more comfortable with its Intel architecture.
Surface Go aims at providing an upgrade path for Surface 3 users. It also looks to broaden Surface’s reach into the enterprise through frontline workers and, more broadly, users who might not need all the horsepower of a Surface Pro but do not want to compromise on the hardware design. Assuring consistency of experience and fidelity of apps across the portfolio was probably the biggest driver behind sticking with an Intel architecture at a time when Windows-on-ARM is just getting off the blocks.
The ‘once bitten, twice shy’ Surface team prioritized capitalizing on a small but solid base with a known formula for now, and will probably wait for the Snapdragon 1000 to broaden the appeal to users who might prioritize mobility and a more modern workflow over legacy. As disappointing as this might be for BYOD users and consumers, it was the safest bet to get IT buy-in.
A lot has been written about Surface Go being a reinvigorated effort on Microsoft’s part to go after the iPad – and how could it not be, given that the iPad remains, eight years in, the most successful tablet in the market. While eroding the iPad’s market share would be a welcome bonus, I think there is market share to be had within the Windows ecosystem first.
Considering price and design as the only two ingredients a product needs to tackle the iPad’s dominance in the tablet market ignores a crucial factor in the iPad’s success: apps. Apps make up a big part of the experience users buy into when using iPad and iPad Pro. This, in my opinion, is still Surface’s weakest link when it comes to broadening its reach into the consumer space.
When we look at the Windows 2-in-1 market, design still leaves a lot to be desired, especially when it comes to the overall package of screen quality, keyboard, and pen experience. This is especially true of the new Windows-on-ARM devices, which boast excellent battery life and LTE connectivity but do not seem designed with mobility first in mind. While many Windows manufacturers continue to dismiss the success of Surface based on market share, it is clear that brand awareness and satisfaction have grown significantly. Throwing Surface Go into the mix of options for consumers and students getting ready for back-to-school is not a bad thing.
Looking forward to Surface Go 2
I feel Surface Go landed on a design and price point that offer a lot of promise. Thinking ahead to a more mature Windows on ARM, a Qualcomm Snapdragon 1000 and users finally ready to benefit from connectivity anytime anywhere, it is hard to see how Surface Go 2 (or whatever it will be called) would not offer an alternative to the current Intel architecture. And I must say I look forward to that!
Reading Time: 3 minutes
A few months ago, I wrote about Apple’s slight pivot with iPad. We at Creative Strategies have had the opportunity to study tablets since their inception, when Microsoft introduced the Tablet PC in 2001. The category went mainstream when Apple released the iPad, and over that time we have studied the rapid adoption of tablets as well as the quick decline and now normalization of tablet market sales.
Reading Time: 4 minutes
While technology has certainly made the world a smaller, more connected place, it’s becoming increasingly clear that it hasn’t made it a more unified place. Even in the realm of technology advancements—which are generally considered to be apolitical and seemingly independent of geographical boundaries—important regional differences are starting to appear. More importantly, I believe these differences are starting to grow, and could end up leading to several distinct geographic-driven paths for technology and product evolution. If this geographical splintering happens, there will be a profound impact on not just the tech industry, but industries of all types around the world.
Having spent the last week in Beijing, China to attend the Create AI developer conference hosted by Baidu (commonly referred to as the Google—or more precisely, the Alphabet—of China), some of these geographic differences started to come into focus. In the realm of autonomous driving, for example, it’s clear that Baidu’s Apollo platform for autonomous cars is targeted at the Chinese domestic market. While that’s perfectly understandable, the distinct character of everything from the slower speeds at which Chinese cars tend to drive, to Beijing’s nearly impassable hutong alleyways are bound to create technology requirements and developments that may not be relevant or applicable for other regions.
In addition to autonomous driving, there’s been an increasing focus by the Chinese government to create more native core tech components, such as unique semiconductor designs, over the next several years. The “Made in China 2025” plan, in particular, has put a great deal of attention on the country’s desire to essentially create an independent tech industry infrastructure and supply chain.
One of the reasons for the appearance of these regional fissures is that technology-based products have become so integrated into all aspects of our society that governmental agencies and regulatory bodies have felt the need to step in and guide their deployment and development. Whenever that happens in different countries around the world, there are bound to be important differences in the directions taken. Just as several hundred years of local cultural norms have driven trade policies, business rules and the evolution of unique societal standards in each country, the local interpretation and guidance of key technology advancements could lead to important variations in future technological culture and standards around the world.
While these regional technology differences might not happen in a truly united world environment, they still could in one that’s merely well connected. In other words, old-world historical and cultural differences between countries or regions could prove to be a much bigger factor in the evolution of technology products than many have previously considered.
A practical example is being highlighted by the current trade wars between the US and China. Admittedly, when you consider the issues at a high level, there are a wide variety of concerns underlying the latest trade maneuvering, but for the tech world, much of it boils down to each region wanting to deter the influence or participation of major companies from the “other” side in their home country. We’ve already seen this play out with companies like Google and Facebook being banned in China, and the US blocking the use of Huawei and ZTE telecom equipment and China Mobile from participation in US markets.
In addition to these big-picture differences, there are other, more subtle factors influencing tech-related relations between the countries as well. For example, many large Chinese tech companies, including Baidu, are squarely focused on Chinese domestic market needs and show little concern for other potential regional markets around the world. Given how large the Chinese domestic market is, this certainly makes business sense at many levels, but it’s noticeably different from the more global perspective that most major US tech companies have. (For the record, some Chinese-based companies, like Lenovo, do have a global perspective, but they tend to be in the minority.)
The practical result of this region-specific focus could end up being a natural selection-type evolution of certain technologies that creates regional “species” which have crucial differences from each other. Hopefully the gaps between these regional technological species can be easily overcome, but it’s not inconceivable that a combination of these differences along with regionally driven regulatory variances (and a dash of politically driven motivations) end up creating a more technically diverse world than many have expected or hoped for.
To be clear, the vast majority of current technological developments are not being geographically limited. Plus, there are still many great examples within the tech industry of companies from different regions working together. At the Baidu event, for example, Intel was given a large chunk of time during the main keynote speech to highlight how they are working with Baidu on AI. The two companies talked about the fact that Intel silicon is still a key part of how Baidu plans to drive its PaddlePaddle AI framework and overall AI strategy moving forward—despite the announcement of Baidu’s AI-specific Kunlun silicon.
We are, however, reaching a point in the worldwide tech industry’s development that we can no longer ignore potential regional differences, nor assume that all tech advancements are following a single global path. Given the incredible potential influence of technologies like AI on future societal developments, it’s critical to keep our eyes wide open and ensure that we guide the path of technology advancements along a positive, geographically inclusive route.
Reading Time: 3 minutes
BEIJING, CHINA – AI is well known as a hot area of innovation from the likes of Google, IBM, Microsoft and – Baidu? You may not have heard of the Chinese tech giant but it’s starting to make waves across a range of product areas including advanced chips, robotics, autonomous vehicles and artificial intelligence.
At its annual Baidu Create conference here, Baidu announced a partnership with Intel, an advanced chip architecture of its own design and a new version 3.0 of its Baidu Brain AI software.
As part of the Intel partnership, Baidu announced Xeye, a new camera designed to help retailers offer a more personalized shopping experience. The camera uses Baidu’s advanced machine learning algorithms to analyze objects and gestures as well as detect people in the store. It also leverages Intel’s Movidius vision processing units (VPUs) to give retailers low-power, high performance “visual intelligence” as to products and activity in the store.
Separately, Baidu is improving machine vision performance via its EasyDL, an easy-to-use “code free” platform designed to let users build custom computer vision models with a simple drag-and-drop interface. Released in November as part of Baidu Brain 2.0, EasyDL applications are being used by 160 grocery stores in the U.S. including Price Chopper. The computer vision application recognizes items left in a customer’s shopping cart by mistake to help ensure that they’re purchased.
The newer Baidu Brain 3.0 makes it easier and quicker to train a computer vision model using EasyDL so, for example, the application designed for the grocery cart can now be developed in as little as 15 minutes.
In addition to Xeye, Baidu also announced it will use Intel’s FPGAs (Field Programmable Gate Arrays) to enable workload acceleration as a service on the Baidu Cloud. “The best is yet to come. We are excited to see the innovative Baidu Brain running on Intel Xeon processors,” said Gadi Singer, general manager of Intel’s AI Products Group who joined Baidu CEO Robin Li on stage.
But Baidu has big chip plans in its own right. During his keynote, Li announced Kunlun, China’s first cloud-to-edge AI chip, designed for high performance AI scenarios. Li said Kunlun will be marketed for use in data centers, public clouds and autonomous vehicles.
Baidu started developing an FPGA-based AI accelerator for deep learning in 2011 and began using GPUs in datacenters. Kunlun, which is made up of thousands of small cores, has a computational capability which is nearly 30 times faster than the original FPGA-based accelerator.
And while the initial market for Kunlun will be China, Technalysis Research analyst Bob O’Donnell said enterprises across the globe would be wise to be aware of Baidu’s growing product portfolio.
“Baidu is a key player for multinational corporations with a presence in China because they’re driving innovation in the same way that Amazon or Google is in the U.S.,” said O’Donnell. “They have an incredibly strong focus on AI across a lot of different industries that’s as broad as any other company I know of. Right now they’re very China-focused, but I expect that to expand over time.”
Chip rivals like Nvidia have made huge strides in support of autonomous vehicles with both hardware and software frameworks and simulation software for testing designed to help car makers get vehicles to market.
Similarly, Baidu has made a big investment in its Apollo software for autonomous vehicles of all sizes, from automated wheelchairs to cars, buses, trucks and other transport vehicles. At Create it showed off the new Apollo 3.0 software that is just starting to be used in autonomous vehicles in campuses and other closed environments in China such as senior living communities.
“We are really excited; this will surely change everyone’s lives,” said Li, who announced that the 100th autonomous bus had recently come off the assembly line.
“You can see this is a real automatic driving solution, there’s no steering wheel, brake pedal or throttle, but it’s also very stylish inside,” said Li.
The vehicles are planned for commercial use in both China and Japan.
Analyst O’Donnell said it looks like Baidu’s autonomous vehicle effort is focused on the Asian market for at least the near term. “But they’re really establishing some important benchmarks here with the breadth of what they’re doing that competitors are sure to take note of.”
Reading Time: 1 minute
Ben Bajarin is joined by Ashraf Eassa to discuss Intel and the challenges which have been mounting with the company.
Reading Time: 4 minutes
The concept of Device as a Service (DaaS) has been gaining steam for a few years now, and my team at IDC has done extensive work around this topic. In fact, we’re currently wrapping up an extensive study on the subject that includes a massive multi-country survey of commercial adopters, intenders, and resistors, as well as a forecast that will include our view on the impact of DaaS on the commercial PC, tablet, and smartphone markets. While the momentum in this space is clear, there are still plenty of doubters who like to throw out numerous reasons why DaaS won’t work, and why it won’t bring about the benefits to both buyers and sellers that I’ve outlined in previous columns here and here. Let’s examine some of those criticisms.
There’s Hype, But Is Anybody Really Buying?
The hype defense is probably the most common pushback and question we get when it comes to DaaS, and it’s easy to understand why the average IT professional or even industry insider might be skeptical. But the fact is, we’ve now surveyed hundreds of IT Decision Makers (ITDMs) and talked to most of the major providers, and this isn’t just an interesting idea. We continue to find that DaaS is very appealing to a wide range of organizations, in numerous countries, and across company sizes. The idea that a company can offload some of the most mundane tasks its IT department deals with while right-sizing the devices it deploys, gathering user analytics, and smoothing out costs is very compelling. And as the industry has moved quickly from a focus purely on PCs to one that includes additional devices such as smartphones and tablets, interest and adoption will continue to grow.
It’s important to note that even a company completely sold on DaaS won’t make this type of transition overnight. Most companies will start small, testing the waters and getting a better understanding of what works for their organization. In the meantime, there are existing hardware, software, and services contracts that could still have months or even years left before they expire. Like many things in technology, you can expect DaaS adoption to happen slowly at first, and then very fast.
DaaS Costs Are Too High
One of the key areas of criticism leveled at DaaS is that today’s offerings cost too much money per seat. It’s hard to argue with this logic: If an organization thinks DaaS costs too much, then it costs too much, right? But often this perception is driven by an incomplete understanding of what a provider includes in the DaaS offering. Today’s contracts can run from just the basics to something much more complete. Yes, a contract with a full range of services such as imaging and migration, deployment and monitoring, break/fix options, and secure disposal can be pricey. But what critics often fail to realize is that their company is paying for these services in some way or another today. Either they’re paying their own IT staff to do it, or they’re paying another service organization to do bits and pieces of it (and they’re likely not tallying all the costs in one place). Alternately, some of these tasks—such as secure disposal—aren’t happening at all, which is one of those areas that could end up costing the company a lot more money in the end.
Now, with all that said, it’s entirely possible that at the end of the day a company may well end up paying more for its entire pool of services under a DaaS contract. At that point, the first question it needs to ask is: Am I buying my DaaS service from the right vendor? If the answer is yes, then the follow-up questions are: Are the benefits of managing all these services through a single provider worth the extra cost to my organization? Does it free my IT organization to do other important jobs? The answer may be yes.
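The per-seat comparison described above can be sketched as a simple back-of-the-envelope calculation. All of the figures below are hypothetical placeholders, not numbers from any real DaaS contract—the point is only to show how distributed in-house costs stack up against a single per-seat fee:

```python
# Back-of-the-envelope comparison of in-house device management vs. a DaaS
# contract. All dollar figures are hypothetical placeholders.

def in_house_monthly_cost_per_seat(
    device_price=1200.0,   # purchase price of the device
    lifetime_months=48,    # how long the device stays in service
    it_support=25.0,       # monthly IT staff time: imaging, break/fix, monitoring
    disposal=4.0,          # amortized secure-disposal cost per month
):
    """Approximate all-in monthly cost per seat when IT handles everything."""
    return device_price / lifetime_months + it_support + disposal

def daas_monthly_cost_per_seat(contract_fee=65.0):
    """A DaaS contract rolls hardware and services into one per-seat fee."""
    return contract_fee

in_house = in_house_monthly_cost_per_seat()   # 1200/48 + 25 + 4 = 54.00
daas = daas_monthly_cost_per_seat()
print(f"In-house: ${in_house:.2f}/seat/month; DaaS: ${daas:.2f}/seat/month")
print(f"DaaS premium: ${daas - in_house:.2f}/seat/month")
```

With these made-up inputs the DaaS contract costs $11 more per seat per month—the premium a company would then weigh against the single-provider benefits discussed above.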
Refurbs Will Negatively Impact New Shipments
One of the key benefits of DaaS is the promise of shorter device lifecycles, which has always been, to me, one of the win/win aspects of this concept. Companies win by replacing employees’ hardware more often thanks to pre-determined refresh cycles. Instead of finding ways to keep aging devices around for “one more year” to push out capital expenditures, DaaS allows companies to equip employees with newer machines that drive increased productivity, offer improved security, and lead to improved user satisfaction. From the hardware vendor side, the benefits are obvious: faster refresh rates that become more predictable over time.
But what about all those PCs collected at the end of a two- or three-year DaaS term? Won’t they cannibalize shipments of new PCs? The fact is, today there’s already a huge business around refurbished PCs, tablets, and smartphones. What the DaaS market could do is create a much more robust, high-quality market of used commercial devices. As with the automobile leasing market, these devices receive regular maintenance, which means a higher quality used product. DaaS providers can redeploy (or sell) these into their existing markets at lower-than-new prices and still drive reasonable profits. Or they can target emerging commercial markets where even ultra-low-cost devices are a tough sell today.
Ultimately, I believe that DaaS will prove to be a net positive in terms of overall shipments for the industry. Even if that proves incorrect, I’m confident it will drive greater profitability per PC for vendors participating in the DaaS market.
DaaS Will Never Appeal to Consumers
It’s true that to date DaaS has been focused on the commercial segment, but it’s only a matter of time before we see consumer-focused plans come to market. Apple’s success with the iPhone Upgrade Program, where you pay a monthly fee that includes AppleCare+ coverage and the promise of a new iPhone every year, shows there’s already an appetite for this. It also proves that a robust secondary market doesn’t necessarily cannibalize a market (and Apple profits greatly from its resale of one-year-old iPhones). You can easily imagine Apple adding additional services to that program and extending it to include two- or three-year upgrade paths for iPads and Macs.
And so it’s not hard to imagine the likes of Apple, HP, Dell, Lenovo and others eventually offering consumer-focused DaaS products. To many, the idea of paying a single monthly fee to one company to eliminate most of the hassle of managing their devices—and to ensure no budget-busting costs when it’s time to replace an old one—would be too good to pass up.
Reading Time: 3 minutes
Intel 5G Chips allegedly dropped by Apple
Ctech reported on Thursday that Intel documents it reviewed indicate that Intel will not be providing 5G modems to Apple in 2020. Apple has notified Intel that it would not use a mobile modem developed by the chipmaker in its next-generation mobile device, Intel executives said in the communications. Further development of the modem component, internally called “Sunny Peak,” has been halted, and the Intel team working on the product will be redirected to other efforts, the executives said.
Neither Apple nor Intel commented on the report.
Reading Time: 3 minutes
One of the darling tech stocks of the last year has been Micron, a relative unknown in the world of technology compared to names like Intel, NVIDIA, and Samsung. With a stock price driven by market demand, up over 90% in the last calendar year, there are lots of questions about the strength of Micron in a field where competitors like Samsung, and even Intel, are much bigger names.
Last month, Micron’s earnings were eye-opening. For its fiscal Q3, it posted a 40% increase in revenue over the same quarter the previous year. Even more impressive was a doubling of profit over that same period. The quarterly results included $3.82B in net income on $7.8B in revenue, with a Q4 revenue forecast of $8.0-8.4B.
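It’s worth pausing on what those headline numbers imply. A quick bit of arithmetic on the reported figures (the derived values below are my own calculations, not numbers from the earnings release) shows why analysts took notice:

```python
# Sanity-checking the implications of Micron's reported fiscal Q3 figures.
# Inputs are the reported numbers; derived values are simple arithmetic.

revenue = 7.8        # fiscal Q3 revenue, in $B (reported)
net_income = 3.82    # fiscal Q3 net income, in $B (reported)
yoy_growth = 0.40    # reported year-over-year revenue growth

# Net margin: what fraction of every revenue dollar fell through to profit.
net_margin = net_income / revenue

# Implied revenue for the same quarter a year earlier, given 40% growth.
prior_year_revenue = revenue / (1 + yoy_growth)

print(f"Net margin: {net_margin:.0%}")
print(f"Implied prior-year Q3 revenue: ${prior_year_revenue:.2f}B")
```

Roughly a 49% net margin, on revenue that grew from an implied ~$5.6B a year earlier—remarkable numbers for a company selling what many still think of as a commodity product.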
NVIDIA, by contrast, had $3.2B in revenue last quarter. Yet the GPU giant is getting more attention and more analysis than a company more than twice its size.
As part of the earnings announcement, Micron CEO Sanjay Mehrotra expressed confidence in the continued demand for memory as well as Micron’s ability to maintain its profit margins with consistent pricing. This directly addressed a key segment of the financial analyst community that continues to worry that memory demand will dry up and limit Micron’s growth potential. Micron is at higher risk in that scenario because of its singular focus on memory technology, while competitors like Samsung and Intel are more diversified.
This Boise, Idaho based company has to answer the same question as the rest of the memory vendors in the tech field: will demand for memory abate with product shifts in the market or when the build capacity catches up?
There are several reasons why we could see demand for both DRAM (system memory) and NAND (long-term storage) slow down. By many measures, the smartphone market has peaked, with only developing nations like China and India still increasing unit sales, and with much lower-cost devices. Chinese phone sales are in flux thanks to trade war and tariff concerns; Qualcomm and Micron are both US-based and are major providers of smartphone technology. The Chinese government is investigating memory price-fixing accusations against all the major vendors, and a poor outcome there could mean major penalties and unpredictable changes to the market.
But the Micron CEO doesn’t believe those factors will win out, and neither do I. For the foreseeable future, DRAM demands will continue to grow with mobile devices as we increase the amount of memory in each unit. The coming explosion of IoT products numbering in the billions will all require some type of system DRAM to run, giving Micron and others a significant opportunity to grow. And we cannot forget about the power of the data center and, in particular, the AI compute market. NVIDIA might be the name driving the AI space but every processor it builds will require memory, and massive amounts of it.
In the NAND market for SSDs, there is a lot of competition. But Micron benefits from its OEM arrangements as well as its push into more vertical integration, selling directly to consumers and enterprise customers. Micron has made a push to counter the dominance of Samsung SSDs in the DIY and OEM channels with its own Crucial and Micron-branded options, a lineup that improves with each generational release.
As more customers migrate from spinning hard drives in their PCs and servers to larger capacity solid state drives that are faster and more reliable, there remains a sizeable opportunity for memory vendors.
If demand continues to increase, capacity becomes the next question. When AMD was building its Vega GPU using a new memory technology called HBM2, the product suffered because of limited availability. Though Micron was not playing in the HBM (high bandwidth memory) space, it is a recent example of how the memory market has struggled to keep up with the varied demands of technology companies.
Additional fab facilities are being built, but if it seems like vendors aren’t bringing them online as fast as they could, you aren’t imagining it. New fabs will alleviate capacity concerns, but they will also decrease pricing and lower margins, something any reasonable business will be wary of in volatile markets.
Over the decades of memory production, the market has been cyclical. As technologies moved from generation to generation, demand would plummet, followed by the higher prices associated with the next memory technology. As use of that memory peaked and fell, the cycle would restart anew. But because of the growth in demand for memory products of all kinds, and segments of extreme growth like AI and IoT, it looks like this pattern will be stalled for some time.
Reading Time: 4 minutes
A clear shift is happening. A few years ago, this shift was viewed as impossible. The growth stage of the Internet, while still a bit immature from a global viewpoint, was driven by free services subsidized by ads. Pundits and experts believed not only that this was the best business model, and the only one that could achieve scale, but that it was the one consumers preferred. The common sentiment was that consumers did not mind ads, in some cases even liked them, or at the very least tolerated them. For this reason, it was firmly believed the only way to grow a consumer business was with advertising. That entire theory is now being challenged as data point after data point emerges showing the mature stage of the Internet leading back to business models where the customer pays for something they value rather than getting it for free and tolerating ads.
Reading Time: 5 minutes
Despite all the clichés about the challenges of change, the truth is that it can be difficult for people to accept impactful alterations to the way they do things. This is particularly true in the mode of interaction we have with technology-based products because many of the changes occur in ways that aren’t immediately obvious. As tech gadgets, applications and services start to impact more aspects of our lives, there’s a growing awareness of the potential harmful impact of overusing tech-related products.
Together, these issues point to the fact that we’re entering an interesting new phase in the relationship people have with technology. Interestingly, I expect this phase to be significantly more impacted by common traits of human nature than in the past. Why? Because for technology products to continue to evolve, the companies that create them are going to have to be more cognizant of how those products either influence or are influenced by human factors that they haven’t had to think much about in the past.
The way that people now think about and interact with technology is changing and successful companies need to be mindful of these new perspectives. Part of the reason for this shifting perspective is the fact that our increased exposure to tech-based products has altered our expectations. Many of the changes we encounter with today’s new tech products are subtle evolutions of existing products and are hard to appreciate. Over time, a collection of these changes can certainly produce a notable difference but in this age of relatively advanced, mature product categories, individual features or new hardware models often don’t make much of an impression anymore.
Of course, sometimes individual products or even features can have an immediate and profound impact—if not on the market overall, then at least with certain subsections of the market. Smart speakers, such as Amazon’s Echo or Google Home, for example, are arguably one of the better recent examples of this phenomenon, as they quickly became a commonly used device in households all over the world.
While there’s no single (or simple) answer as to what makes certain devices, applications or services have a broader, faster impact than others, I’d argue one consistent thread across most of them is that they connect to some fundamental aspect of human nature better than others. Whether it be a more intuitive means of interaction (as voice-based smart speakers have enabled), more intelligent and accurate means of analyzing information (as AI-based software tools have started to do), or some other type of capability that a wide variety of people can easily relate to, it’s the right kind of connection to how people think and act that helps products be successful.
Conversely, ignoring key aspects of human nature can prevent certain products from either achieving the level of success that some expect, or from evolving in ways that many predict. At a very basic level, many people don’t really like dramatic changes, as mentioned earlier, particularly when it comes to their technology-related products and services. While some of the resistance to change is certainly age-based, the recent challenges Snap faced when it dramatically modified its app’s user interface show that even young people can be resistant to significant alterations to their technology products.
In the business world, resistance to major changes in technology is particularly pronounced. That’s why, for example, there are still a lot of companies running 1970s- and 1980s-era mainframes and plenty of other much older software. It’s easy to get caught up in the sleek technology of today’s cloud-based, microservice-enabled software environments, but even in the most advanced organizations, those tools typically only represent a tiny fraction of what they’re actually running across the whole company. Tech companies who ignore those realities and don’t provide tools to ease the transition process between older tools and newer ones (after all, it’s human nature to look for an easier way to get a task done—or just avoid it if it appears overwhelming), are bound to face significant challenges.
For consumers, the influence of human nature on interactions with technology-driven products and services manifests itself in an enormous variety of ways. A recent, somewhat controversial, example involves self-driving cars. While few would argue conceptually about the value of autonomous driving features, there are serious and profound questions about how realistic it is to offer semi-autonomous capabilities (such as Tesla’s AutoPilot mode) in cars. As several recent tragic accidents have shown, when people believe they don’t have to actually pay attention while sitting behind the wheel, many stop doing so. The idea that people are going to maintain the level of concentration and focus necessary to very quickly take over in the event of a sudden change in the driving environment goes completely against human nature. In my mind, that makes these semi-autonomous capabilities potentially even more dangerous than regular human-powered driving because they lull people into a false sense of security. Until cars can be completely autonomous and require zero interaction on the part of the driver and passengers inside—a capability that’s still a ways off according to most—offering features that go directly against human nature is a mistake.
There are other instances of how human interactions with technology are not as easily defined and well understood as many think. Most people are creatures of habit and once they get comfortable interacting with devices, applications or services, they’re not overly eager to change. One of the most interesting examples involves PCs and tablets. Though many people were quick to write off both desktop and notebook PCs as dinosaurs once tablets like the iPad arrived, it’s now clear that a keyboard-based interaction model is still the preferred method for interacting with powerful computing devices—regardless of the user’s age.
Yet another way in which people’s relationship with technology products and services has changed is just starting to be felt, but I believe it could end up being one of the more profound adjustments in how people interact with technology. In a classic case of “be careful what you wish for,” technology has now given us access to most of the world’s information and all of the world’s people at any time. While that conceptually sounds like an amazing accomplishment—and certainly, in many ways, it is—the harsh reality of that capability is a world that’s grown further apart as opposed to closer together, as most presumed this capability would enable. While the reasons for the growing separation are many and complex, there’s no question that the overuse of technology is starting to take its toll.
As our usage of technology and its influence on all aspects of our lives continues to increase, it’s also leading more people (and companies) to do more public soul-searching about how people interact with tech products and what their expectations for those interactions should be. Just as we’ve started to see ethical questions being raised in the medical field based on advancements in technology there, so too are we starting to see questions about whether “because we can” is really going to be an acceptable answer for many types of advancements in tech.
Despite these concerns, there’s no question that technology has had a profoundly positive impact on both our personal lives and the world around us. Even potentially challenging upcoming technology advancements like AI (artificial intelligence) are more likely to have a profoundly positive impact for most people than a negative one. But as with virtually everything in life, common sense and human nature tell us that sometimes there’s just too much of a good thing. More importantly, tech companies that can adjust their strategies to better accommodate the growing sophistication in the relationship between people and their products should be poised for a more successful future.
Reading Time: 3 minutes
In Apple’s March quarter earnings, they boasted that their platforms had driven 270 million subscriptions, mostly to third-party apps. Subscription business models seem to be in vogue as Apple and others continue to share the clear growth in subscriptions. Despite this growth, many have voiced their concerns over the subscription business model and how consumers may have a hard time tolerating dozens of subscriptions and could get burnt out by this business model.
Reading Time: 4 minutes
There has been a lot of speculation lately that Apple is getting ready to create some media bundle that would be under subscription. According to multiple sites, the idea would be to bundle all of their media properties under a special program that mirrors something like Amazon’s Prime services.
I have no direct knowledge that this will happen but if you read the tea leaves surrounding Apple’s various acquisitions and new media emphasis, it is not too hard to see this possibility.
With that in mind, their most strategic investment so far this year that could be related to this is Texture, the magazine subscription service that offers close to 200 magazines for $9.99 a month. I am a big fan of Texture and use it almost daily to read the highlighted articles in its article overview section as well as the actual magazines I like to read, especially the food, sports and news magazines available.
The service itself has grown from about 20 magazines at launch to the 200+ available today, and now that Apple owns it, I suspect it will add many more new magazine titles to its offering in the future.
Recently, Apple has reintroduced a more robust version of their free News app, which includes human curation as well as ML and AI-based algorithms that try to block fake news and only deliver well researched and well-written stories that would be of interest to their customers.
I had a briefing on this News service when it relaunched and was highly impressed with the journalistic talent Apple has brought in to help curate this site as well as contribute to its content. In the past, I had not spent a lot of time in the Apple News app, but since it is now placed on the home screen of my iPhone X with the iOS 12 public beta, I find myself checking it multiple times a day to keep up with the news.
These two apps have become vital to me since getting fact-based news, and quality content has become critical in these days where fake news and political bias is blasted over social media and Facebook, Twitter and other sites that struggle with keeping this type of rhetoric in check.
Indeed, what has become essential to many is the ability to go to a site and know that what is there is the result of well-researched journalism and, at least in the news reporting sense, the stories they post are well written by professional journalists and can be trusted to be accurate.
Now that Apple also has Texture, it would be smart for them to integrate part of their News service into the Texture subscription service and innovate on both fronts. These combined properties could become vehicles for more in-depth stories, documentaries and even commentary that would be based on more personalized preferences. Apple is also big into AR. This could be the place where they integrate AR content and visual views and functionality in the News area which they control or even create dedicated AR/MR based magazines that would be part of Texture.
What makes this possible is that Apple has created a set of platforms that can easily reside on top of their already successful subscription services like iTunes and Apple Music. While Texture has its own subscription platform today, Apple could either integrate that into their existing subscription platform or take the magazine content and add it to their subscription infrastructure that they have now.
However, if they were to follow Amazon’s Prime example, then any service would also need to have video content too. Apple is way behind their competitors when it comes to video offerings and especially homegrown content, and they would need to be more aggressive in acquiring or creating much more movie, TV and theatrical shows that would be able to compete with Amazon, Netflix and others.
In our Techpinions Podcast last weekend, Ben Bajarin and Arthur Greenwald, a well known Hollywood producer who co-produced the Mr. Rogers show and knows a great deal about creating original content, gave a good perspective on the challenges Apple faces when it comes to attracting talent and getting quality original shows to market. Arthur’s key points:
- Apple’s challenges will be dealing with content failures and Hollywood’s “characters” who sometimes get into trouble socially
- Apple is still experimenting and does not know if they want to be a network (like ABC) or premium channel (like HBO)
- There are baseline metrics for success that indicate a show’s return on investment. Apple’s bar may be different, but hits are few and far between in network television.
- Global success is a gold mine with content if it can be achieved. Apple’s potential upside is the global scale of their customer base.
Given these challenges, I would not be surprised to see Apple buy a couple of the smaller video production companies that already have hits in the market to accelerate their original programming. Also, given their tight relationship with Disney and Pixar, I could even see them trying to tap into some of their skills and content as well.
However, Apple’s careful planning when it comes to subscription infrastructure puts them in an excellent place to deliver an encompassing media subscription service, which makes it somewhat likely they will offer something like this in the near future. More importantly, with the right mix of content services, priced competitively, it could add significant revenue to their services business and bring more people into Apple’s hardware, software, and services ecosystem.
While still mostly a rumor, odds are that Apple is moving in this direction and Texture and an even more innovative News application could become one of the cornerstones of any subscription service they offer.
Reading Time: 1 minute
Ben Bajarin is joined by Arthur Greenwald for a discussion on how tech companies are further investing in original content and the potential challenges and opportunities that lie in the media and entertainment business.
Reading Time: 4 minutes
Let’s face it, this has been a crummy year for tech. From the exposure of outright fraud (Theranos), shoddy business practices, numerous examples of inappropriate (and worse) corporate and workplace behavior, data and privacy breaches, concern about the ‘bigness’ and ‘dominance’ of certain companies, worries about screen addiction…the list goes on. But as we close out the first half of the year and head into the July 4th holiday, perhaps it’s not a bad exercise to step back and recognize some of the good things about tech.
This is not a review of “top apps” or “best gadgets”. Rather, this is my own, admittedly subjective list of some everyday apps, tools and capabilities that are just plain important and useful to most consumers. There are surely downsides to each of these, but a good gauge is how much you would miss them if they suddenly disappeared.
Google Maps. I marvel at just how well Google Maps generally works, and how it just continues to improve, without fanfare. Just think about how generally accurate it is, and how many major and minor features have been introduced that make Google Maps increasingly useful. There isn’t a huge amount of competition for Google Maps, and nobody really seems to mind.
Smartphones. No doubt there are downsides to the smartphone. But step back and just think about how many different things can be done on this little pocket computer. Even mid-priced smartphones are fantastic. And, given how many hours a day smartphones are used and how many functions they perform, it’s remarkable how generally reliable they are.
WordPress. There are many terrific publishing platforms and content management systems. But WordPress is the granddaddy. It has enabled tens of millions of individuals and small businesses to set up beautiful, highly functional websites with relatively little training. There’s a great ecosystem of add-on tools and features.
Content-a-Looza. We’re all highly aware of how digital and the internet are impacting huge industries, such as print media, publishing, and so on. Not to diminish that at all, but on the opposite side, it’s amazing how low the barriers are to both creating and publishing content across multiple forms of media. Consider how quickly and easily one can publish a long-form story on Medium, upload an innovative clip to YouTube, get a song onto SoundCloud, start a podcast with a colleague, etc. And the hardware and software tools to enable these creations are just so much cheaper and more accessible than they used to be. Sure, there’s a lot of crappy content out there, monetization is challenging, traditional curation and entire industries are being up-ended — but on the other hand, there’s the rise of an entire creative class, be it profession or hobby, that might never have existed otherwise.
Wikipedia. The content might not always be 100% accurate or up-to-date, but Wikipedia is incredibly useful, offering generally good content across a huge number of topics and categories. That it’s a non-profit and exists on average donations of $15/year from millions of people is also a testament to some of the good things about the Internet. And 99% of people have no idea how the content gets up there…it’s just there.
Travel Apps. On the one hand, the UI of the leading travel apps hasn’t changed in, seemingly, a decade. On the other hand, if suddenly a business trip landed in your lap, you could book a flight, hotel, and car – at reliably competitive prices – in less than 10 minutes, and in fewer than 20 total clicks. Seriously, try it. Consider what has to really happen at the back end to make it all happen. And how frequently things change. Dizzying.
The Cloud. I speak about the Cloud here from a consumer, not a business standpoint. It’s probably the most game-changing framework since the advent of the PC. Consider that, ten years ago, if your PC crashed it was a complete disaster. Now, if you’ve taken the right precautions, the PC itself is practically disposable, since everything is stored elsewhere. The cloud has also helped unleash competition to what were seemingly entrenched businesses: think Quicken to Mint, iTunes to Pandora/Spotify, Outlook to Gmail, and the world of streaming content. All of these would be nearly impossible without the unfathomably steep drop in the price of storage and the industry’s nearly universal embrace of this new business framework.
Crowdfunding. Perhaps this is a personal favorite, but I think crowdfunding represents some of the best possibilities of tech and the internet. Crowdfunding has helped fund millions of people/projects that never would have had a chance of getting financed. The projects tend toward the creative side, which is great. Crowdfunding offers a nearly instant feedback loop on an idea’s viability (and not always correct, on either side, but that’s life, too). I’m also impressed that so many people give to projects when what they get back is relatively minor or nothing at all. We see people’s optimistic, generous, and also gullible sides, exposing among the more human sides of the Internet.
There are downsides to everything, and certainly good cause for conversation about the big picture impact of tech. But it is a useful exercise to occasionally step back and appreciate how effective and useful some of this stuff is, and to applaud the millions of bright, honest, hard-working people who helped create it. Happy 4th.
Reading Time: 5 minutes
Duplex is out for Public Testing
This week at an event in Mountain View, Google demoed the public release of Duplex to a group of reporters. The demo took place at Oren’s Hummus Shop in Mountain View, but no video recording was allowed, so we mostly have an account of what happened through the reporters’ stories. After the software says hello to the person on the other end of the line, it will immediately identify itself: “Hi, I’m the Google Assistant, calling to make a reservation for a client. This automated call will be recorded.” (The exact language of the disclosure varied slightly in a few of the different demos.)
Reading Time: 3 minutes
Over the course of the last week or two, rumors have been consistently circulating that Qualcomm has plans for a bigger, faster processor for the Windows PC market coming next year. What is expected to be called “Snapdragon 1000” will not simply be an up-clocked smartphone processor; instead it will utilize specific capabilities and design features for larger form factors that require higher performance.
The goal for Qualcomm is to close and eliminate the gap between its Windows processor options and Intel’s Core-series of CPUs. Today you can buy Windows 10 PCs powered by the Snapdragon 835, and the company has already announced the Snapdragon 850 Mobile Compute Platform for the next generation of systems. The SD 835-based solutions are capable, but consumers often claim they lack some of the necessary “oomph” for non-native Arm applications. The SD 850 will increase performance by around 30% on both processor and graphics, but it will likely still be at a disadvantage.
As a user of the HP Envy x2 and the ASUS NovaGo, both powered by the Snapdragon 835, I strongly believe they offer an experience that addresses the majority of consumers’ performance demands today, with key benefits of extraordinary battery life and always-on connectivity. But more performance and more application compatibility are what is needed to take this platform to the next level. It looks like the upcoming Snapdragon 1000 might do it.
The development systems that led to this leak/rumor are running with 16GB of LPDDR4 memory, 256GB of storage, a Gigabit-class (likely faster) LTE modem, and updated power management technology. The platform also uses a socketed chip, though I doubt that will make it to the final implementation, as it would dramatically reduce the board-size advantage Qualcomm currently has over Intel’s offerings.
Details on how many cores the Snapdragon 1000 might use, and what “big.LITTLE” combination it might integrate, are still unknown. Early reports point to the much larger CPU package on the development system and make assertions about the die size of a production SD 1000 part, but realistically, anything at the prototyping stage is a poor indicator of that.
It does appear that Qualcomm will scale the TDP up from the ~6.5 watts of the SD 835/850 to 12 watts, more in line with Intel’s U-series parts. This should give the Qualcomm chip the ability to hit higher clocks and integrate more cores or additional subsystems (graphics, AI). I do worry that going outside the TDP range we are used to on Qualcomm mobile processors might lead to an efficiency drop, taking away the extended battery life advantage that its Windows 10 PCs hold over Intel today. Hopefully the Qualcomm product teams and engineers understand how pivotal that advantage is to the platform’s success and will maintain it.
Early money is on the SD 1000 being based on a customized version of the Arm Cortex-A76 core announced at the end of May. Arm made very bold claims to go along with that release, including “laptop-class performance” without losing the efficiency advantages that have distinguished Arm throughout the mobile space. If Arm, and by extension Qualcomm, can deliver a core that is within 10% of the IPC (instructions per clock) of Skylake, with the extreme die-size advantages we think they can achieve, the battle for the notebook space is going to be extremely interesting toward the middle of 2019.
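To put that IPC comparison in concrete terms: single-threaded performance is roughly IPC multiplied by clock speed, so a 10% IPC deficit can be offset or widened by clock differences. Here is a minimal sketch of that arithmetic, using purely hypothetical numbers chosen for illustration (none of these figures are measured or come from the reports):

```python
# Illustrative sketch of the IPC comparison discussed above.
# Single-threaded throughput is roughly IPC (instructions per clock)
# times clock frequency. All numbers here are hypothetical.

def single_thread_throughput(ipc: float, clock_ghz: float) -> float:
    """Instructions retired per nanosecond: a crude single-thread proxy."""
    return ipc * clock_ghz

# Normalize Skylake IPC to 1.0 as a baseline (hypothetical 4.0 GHz clock).
skylake = single_thread_throughput(ipc=1.0, clock_ghz=4.0)

# An Arm core "within 10% of Skylake IPC" at a lower, efficiency-friendly clock.
arm_core = single_thread_throughput(ipc=0.9, clock_ghz=3.0)

gap = 1 - arm_core / skylake
print(f"Hypothetical single-thread gap: {gap:.1%}")
```

The point of the sketch is that IPC alone does not settle the comparison; the clocks each chip can sustain inside its TDP budget matter just as much, which is why the move to a 12-watt envelope is so significant.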
Intel is not usually a company I would expect to be caught flat-footed and slow to respond in a battle like this. But the well-documented troubles it finds itself in with the 10nm process technology transition, along with TSMC’s execution on its roadmap to 7nm and EUV, mean that Qualcomm will have an opportunity. Qualcomm, and even AMD in the higher-end space, couldn’t have asked for a better combination of events with which to tackle Intel: a shift of process technology leadership from Intel to external foundries, along with new CPU and core designs that are effective and efficient, means we will have a battle in numerous chip markets of a kind we have not seen in a decade.
These are merely rumors, but their alignment with the release of the Arm Cortex-A76 makes them more substantial: Qualcomm appears to be stepping up its game in the Windows PC space.
Reading Time: 3 minutes
Last week I talked about the coming of the super bundles. The overall arc of that piece was more about what super bundles are and why consumers will be drawn to them over traditional cable packages. Throughout the article, I mentioned that some companies are positioned better than others, namely Amazon and Apple. In recent days, Apple’s strategy here has become clearer, and a recent piece in The Information seems to confirm my prediction that Apple would indeed launch a super bundle.
Reading Time: 3 minutes
During one of the last WSJ-D conferences, then-Sony CEO Sir Howard Stringer was on stage speaking with D-Conference co-chairs Walt Mossberg and Kara Swisher about how he was working to change Sony’s siloed business model. Just as the session was about to start, Martha Stewart came into the room carrying a handful of cables with different plugs on them and yelled out to Sir Howard that he needed to fix the problem of multiple connectors and incompatible cables. The room erupted in laughter at the sight of Ms. Stewart waving a handful of cables in front of Mr. Stringer, but the audience also applauded her for raising a huge problem that has faced consumers for decades.
Reading Time: 4 minutes
Over the past six months, technology has had its fair share of bad press. We have had many stories covering social media and fake news, online harassment and user tracking, kids and screen addiction, AI stealing our jobs, and robots taking over the world. This past Saturday, however, a New York Times story about how domestic abusers take advantage of smart home tech made me think of the challenges that brands, as well as governments, will increasingly face.
You have heard me say it before: technology per se is not bad, but humans can certainly use it in ways that cause harm to themselves or others.
Two Sides of the Same Coin
I have to admit it never occurred to me that smart home technology could be used to inflict more pain on victims of domestic abuse. The Times referred to reports from abuse helplines that have seen an increase over the past year in calls about abusers using smart home tech for surveillance and psychological warfare. Victims mentioned thermostats being controlled to make the house too hot or too cold, doorbells ringing when nobody is at the door, and door-lock PIN codes being changed to prevent access to the home.
Maybe because I am a “glass half full” kind of person, I have always thought about the advantages smart home tech brings, whether helping monitor elderly parents or helping people with disabilities be more independent in their homes. Of course, abusers are not new to using technology to track their victims; think of GPS, for instance, or social media. Even then, I always saw the other side of the coin, considering how GPS could help me find not just my phone but also my dog, or make sure my child was where she said she was.
According to the National Domestic Violence Hotline, 1 in 4 women (24.3%) and 1 in 7 men (13.8%) aged 18 and older in the United States have been the victim of severe physical violence by an intimate partner in their lifetime. One in 6 women (16.2%) and 1 in 19 men (5.2%) in the United States have experienced stalking at some point during their lifetime in which they felt very fearful or believed that they or someone close to them would be harmed or killed. What makes smart home tech particularly concerning, according to the Times, is that new gadgets such as cameras, smart speakers, and smart locks seem to be used to abuse women in particular. This is because, by and large, men control technology purchases in the home, as well as their setup and any services linked to them.
Educating and Assisting
I find blaming a male-driven Silicon Valley for designing products that might be used to hurt women to be misplaced. It is true that, quite often, tech products are designed by men for men, but this does not mean they are designed to the detriment of women.
That said, I do believe that tech companies have a responsibility to think through how technology is used, and they should warn of how it could be misused. Of course, it is not easy to add to your smart doorbell’s instruction manual: “Warning: an abusive partner could use the camera to monitor every person who comes to your door or every time you leave the house.” Companies could, however, work with support agencies to help them understand how the technology could become a tool for abuse, so that those agencies can advise vulnerable people, teach about it at prevention workshops, and be prepared with practical steps for safety planning.
Staying a Step Ahead
Aside from helping with prevention and assisting victims, I feel there is a significant need for the legal system to stay a step ahead when it comes to technology across the board, and the case of using tech for domestic abuse is no different.
The criminal justice system’s intervention in domestic abuse took over twenty years to get where it is today. And it is far from perfect! In the early 1970s, the law required the police to either witness a misdemeanor assault or obtain a warrant to make an arrest. Only in the late 1970s were warrantless probable-cause arrest laws passed. In the late 1980s, after the Minneapolis Domestic Violence Experiment was published showing that arrest was the most effective way to reduce re-offending, many US police departments adopted a mandatory arrest policy for spousal violence cases with probable cause. When it comes to domestic abuse, however, it seems that the first judgment call on whether to proceed with an arrest is whether or not there are visible and recent marks of violence.
Psychological abuse is much harder to prove, and the process puts a huge burden on the victim, who, in most cases, is reluctant to come forward in the first place. This is what is concerning about how tech can play a role in domestic abuse, and in gaslighting in particular.
I am no criminal law expert, but it seems to me that the legal system should be educated not only about how technology can be used to victimize but also, and perhaps most importantly, about how the same technology can provide information to back up the victim’s story.
We always walk a fine line between civil liberties and policing, but recent history has proven that rushed decisions made in response to an incident are rarely the best. Technology is on a path to becoming more sophisticated and smarter, at times beyond human comprehension, and sadly the world in recent years has shown that there are plenty of people looking to exploit tech for evil purposes. Hoping for the best is no longer an option.