Podcast: Social Media Impact, Pre-CES TV, PC, Automotive and 5G News

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the impact that social media platforms had on this week’s events in Washington, D.C., reviewing the new TV announcements from Sony and Samsung and the PC announcements from Dell, analyzing the automotive news from Harman and Mercedes-Benz, and evaluating the 5G-related news from Qualcomm and Verizon.

(Note that this was recorded just a few hours before Twitter banned Trump, Google removed Parler from the Play Store and Apple threatened to remove Parler from the App Store.)

The Tech Industry's Mission in 2021

Years ago, I stopped doing a predictions article, largely because such pieces seem ridiculous and most readers view them as fiction. I have considered doing a predictions article of things I don’t think will happen, and I still think that would be an interesting piece, but I have yet to attempt it. Still, we do get questions from clients and media about what we expect in the coming year. As analysts, we view the industry from 20,000 feet, and that vantage point has some benefit in seeing broader trends and connecting macro trends to the micro-elements of tech.

The challenge for many who try to predict the future is the desire to make a bold prediction, something that could change a paradigm or bring something dramatically new. A new market, a new technology, etc., is what excites people, but it is not what I expect in 2021. Rather, the challenges we saw in 2020 and the rapid digital transformation happening in both enterprise and consumer markets put a giant spotlight on the reality that tech still has a long way to go. There are still huge numbers of pain points, frustrations, and less-than-elegant solutions that need to be fixed.

So for 2021, the tech industry’s mission is to double down on solving the significant pain points that exist all around us as we use digital devices. I expect this can happen much sooner than in years past, largely because of the incredible progress of cloud computing. When you look at the core reason many tech companies were able to make the adjustment to work from home, even if the transition was imperfect and at times inelegant, the reality is that had the COVID-19 pandemic hit a few years earlier, many companies would have had no chance at a transition and many businesses would have come to a screeching halt.

In many ways, I remember how the continual advancements in Internet infrastructure during the 2000s led to the massive industry opportunities that hit all at once. Most of that decade was more about evolution and solving pain points, and that work led to the mobile era. In many ways, I view cloud infrastructure and the underlying innovation in silicon as a similar laying of the groundwork for what’s next. But what is next is not coming this year, and that is ok. As I said, there are still many pain points to solve.

In which areas can we expect more digital transformation and a continued emphasis on solving pain points? Well, nearly all of them, but the tech industry’s journey has always been one of transition from analog to digital. This is why every company will someday be a tech company, if not by product then by process. All processes move from analog to digital, and many products will similarly include more digital elements. This is one reason why the auto industry, and in particular Tesla, has been talked about so much. That is an industry whose products had all but abstained from modern technology. Tesla brought the auto industry into the digital age, and there is no going back. I include this industry among the pain points to be solved, which leads to many opportunities ahead.

I expect more companies to embrace direct-to-consumer, and the number of new brands and new products to increase, thanks to the ease with which services like Shopify let brands control their storefronts and sell directly to consumers. 2020’s e-commerce surge will no doubt assist this effort going forward, and shopping and buying online for everything is now an ingrained habit for many consumers. While e-commerce has made great progress, there are still pain points to solve in the total experience, automation, delivery, and more.

This point on D2C includes entertainment content. We already see movies go straight to digital services, and I expect this to become the new norm and to offer a wide range of new experiences and new content. However, the current unbundled streaming landscape is a hot mess, and while our research, and that of many others, shows more consumers are leaving cable bundles for other content services, they find themselves leaving one set of frustrations only to discover a whole batch of new ones. There is a huge opportunity here for someone, and as of now, I’d bet on Disney to become the new super aggregator in some way.

Enterprise software is another example where many pain points remain. While many companies embraced the latest technologies to enable their workforce to work remotely, it was inelegant, to put it kindly. There is still tremendous work ahead to more seamlessly blend workflows for individuals and teams, and that effort will pay off even when people start going back to the office. I expect big leaps ahead in solving these pain points for this category in 2021.

Lastly, healthcare is another area where I hope more tech companies can make an impact. If anything, the COVID-19 pandemic has certainly shown how broken much of healthcare is in the US, but it has also shown further opportunity for digital: telehealth and other digital services stand to benefit and bring new opportunities.

While I highlight a few areas, and I’m sure more will pop up in 2021, the bottom line is that, while not sexy, the tech industry’s main push will be on solving pain points rather than launching brand-new inventions or innovations. This is a good thing because, in order to bring about the new computing paradigm, we need to solve current problems rather than just create new ones.

Podcast: Big Tech Trends for 2021

This week’s Techpinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell discussing what they expect to be some of the most important tech industry-related trends in 2021. Specifically, they discuss hybrid work environments, the evolution of semiconductors, 5G and connectivity, corporate social responsibility, gaming, and hybrid cloud and edge computing.

Apple Fitness+ Built to Help You Keep Fit Wherever You Are

This week marked the launch of Apple Fitness+, the workout service that Apple announced back in September together with the new Apple One subscription bundle. I spent some time checking out the trainers and some of the videos across a range of workouts. While jumping around to check everything out and get a sense of the service, I was glad to see that diversity and inclusion were clearly top of mind for Apple. There’s a good mix of female and male trainers. Different body types, skin colors, ethnicities, and even accents are all represented. The trainers make an effort to be as inclusive as possible by suggesting modifications to the workout for beginners and using sign language to welcome and encourage users who might be hearing impaired. I also noticed people with limb differences working out in the studio with the trainers.

As expected, the overall number of trainers is still limited, but I do not doubt that it will grow over time as the user base grows. It will certainly be interesting to see if the talent acquisition battle in tech will expand to include trainers, given how much of the experience is directly linked to them.

Apple Fitness+ vs. Peloton

The first class I took was a cycling class as I was curious to compare it to my Peloton experience.

First, let me say that I’m not a hardcore Peloton user. I just started at the end of September, and I am on ride 61! I came to Peloton with a good dose of skepticism. I really wasn’t interested in the whole Peloton tribe, class, and instructor fandom thing that I saw so many users talk about all the time. And yet, in such a short while, I came to change my mind about those instructors and how much they drive the actual engagement. Peloton also offers quite a range of instructors, in a similar way to what I described for Apple Fitness+. I made an effort to try out different instructors, gravitating towards US-based and UK-based instructors and ultimately favoring women over men. I then landed on two particular trainers, Tunde Oyeneyin and Ally Love, who I find offer the right balance of carrot and stick, but most of all, some inspiration and mental release at a time when we all can do with some. I also appreciate Peloton’s effort in creating theme-based classes rooted in celebrations like Black History Month, International Women’s Month, or LGBTQ celebrations, or in different music, used as a way to open people up to something new.

With this experience as my backdrop, the other day I got on my Peloton, set it to “just ride” (where you see your stats but there is no instructor), and started a workout with Apple Fitness+ and Sherica.

You start the workout, and your Apple Watch is in total sync with the device you are using, be it the iPad, Apple TV, or iPhone. You can control your workout from the device or right from your wrist. I used my iPhone set on the side of the bike for this specific workout, but I would have turned to the Apple TV and the larger screen experience it offers had it not been so early in the morning. I could see my heart rate data, where I ranked on the Burn Bar, and the workout’s elapsed time. The data displayed made up for some of the data I usually see on the Peloton screen. Sherica was an excellent instructor, maybe a bit less polished than those I rely on the most with Peloton, but effective nonetheless. One thing that struck me was how much she was panting, which made me feel better about my own panting, of course. This is not something I ever notice with Peloton instructors, even when they say they are finding the workout hard. What I missed the most in my workout experience, and what shows the limitation of Apple not being able to tap into the equipment you might be using, was the resistance setup. The instructor could only suggest increasing or decreasing resistance, but there was no frame of reference for any adjustment I was making. This created a disconnect between the call-outs on RPM and how hard the instructor wanted me to work. In the end, I burned more calories and reached a higher output than I usually do when not taking a class, proof that I worked harder than I do on my own, even if I did not benefit from an “informed” instructor.

The more I thought about my experience, the more I concluded that the narrative that Apple does not want people to go back to the gym when gyms reopen is a false one. On the contrary, Apple is clearly thinking about the opportunity the gym will offer to drive value through the Apple Watch’s connection to the equipment. Remember that Apple has already started to drive integration between the Apple Watch and gym equipment such as treadmills, ellipticals, indoor bikes, and rowers. Think about going back to the gym, using any piece of equipment with your Apple Watch and AirPods, and doing a class without having to pay a personal trainer or pay extra for a class. It seems like a compelling proposition to me.

One Service Can Fit All

The ultimate hook Apple Fitness+ offers, and what will create real stickiness for the service, is flexibility. It ranges from providing a quick workout with no equipment needed while you’re at home or on a business trip, to coming with you to the gym and being that loyal fitness coach who supports you through your effort to stay fit and healthy.

One interesting thing I noticed with Apple Fitness+ is that the workouts seem to end quite abruptly at the end of the allotted time, but this is because the different workouts are intended to be added to one another to create a longer workout session. So while Peloton has a warmup and a cooldown period with the ability to add more if you want to, Apple offers a warmup but no cooldown. Apple also provides specific videos that walk you through equipment setup, posture, and so on, while Peloton reminds you of how the bike works at the start of every workout. As a user, you can skip this part, but I am guessing given that Peloton is responsible for the bike, they are addressing any possible liability by providing the reminder.

I will continue to explore workouts and take more bike classes, but I will not cancel my Peloton subscription just yet. I am curious to see how Apple will grow Apple Fitness+ because it seems to me that this might be the first Apple product aimed at the broadest set of users, appealing to first-timers and fitness enthusiasts alike. One aspect I am particularly interested to see is how Apple intends to facilitate discovery. This is not something you can walk into an Apple Store and try, which makes me think that the tie-in with gyms makes even more sense if Apple can link the subscription to the gym and users can log into equipment through their Apple ID. Time will tell, but considering Apple made the Apple Watch the center of the Fitness+ experience, I expect Apple to make a significant and prolonged investment in the service.

Podcast: Cisco WebexOne, Marvell Open RAN, Facebook Antitrust, T-Mobile 5G, Windows Arm 64, Google Workspace

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news from Cisco’s WebexOne conference, discussing new chips from Marvell for the 5G Open RAN market, debating the Facebook antitrust news, talking about T-Mo’s new 5G Mobile hotspot and related plan, Microsoft’s addition of support for 64-bit Windows app emulation on Arm-based PCs, and Google’s addition of new features for Workspace that make it easier to operate with MS Office files.

PC Technologies and Product Categories to Watch in 2021

As the year winds down, I’ve been thinking a lot about the big product announcements and technology shifts we’ve seen inside this tumultuous year. While we’re all hoping 2021 will at some point bring our lives back toward something more like normal, the impact of COVID on the technology markets will carry on throughout the year. What follows is not so much a list of predictions but a list of PC-focused topics worth continuing to watch closely in the new year.

Sustained PC Growth
As we entered 2020, the conventional wisdom pointed to a flat to modestly down year for traditional PCs, which had just seen the final commercial laggards make the move from Windows 7/8 to Windows 10. Early in the year, as China faced the first pandemic-forced shutdowns, which affected both the worldwide supply of PCs and demand for them inside the country, it looked as if the pandemic might have a broadly negative impact on the PC market. We know now that wasn’t the case, and as consumers, businesses, and schools around the world moved to live, work, and learn from home, demand for PCs, especially notebooks, skyrocketed. Throughout the year and into December, we’ve seen demand far outstrip supply, and as we head into 2021, a great deal of that demand is still waiting to be filled.

Expect the first half of 2021 to drive very good PC volumes. At present, the second half is less clear, but assuming the supply chain can finally catch up with existing demand, we are likely to see volumes drop off some by then. The real question: Once we’ve widely distributed vaccines and the world returns to whatever the next normal looks like, will the PC retain its newly reestablished important role? Or will it slide into the background again as people shift back toward greater mobility, putting down their notebooks and picking up their smartphones again? We’ll be watching closely to determine whether 2020 and 2021 set a new, lasting TAM for PCs or whether the market quickly reverts to its pre-2020 rhythms and market totals.

Expanded Silicon Diversity
One genie that’s not going back into the bottle is the industry-wide shift toward greater silicon diversity in PCs. For a very long time, Intel ruled the PC space, with AMD playing the foil to its total market dominance. After several years of process challenges, missed deadlines, and product shortages, Intel finds itself in a dramatic battle with a newly resurgent AMD that not only has highly competitive products but the institutional patience to grow its share at a deliberate, sustainable pace. Of course, in the past, Intel has often done its best work when under challenge. With its first 11th Gen Core products shipping now to consumers, a new Evo platform story to tell, and vPro-branded commercial products set to hit in 2021, Intel is ready for battle.

Beyond X86, we’ve got smartphone silicon behemoth Qualcomm continuing to iterate on its PC-focused Snapdragon chips, too. While it has yet to have a breakout hit in the space, product wins with vendors including Microsoft, Samsung, Lenovo, and others suggest there is market viability if it keeps iterating.

Finally, this quarter Apple began shipping its first Apple Silicon-based products in new MacBook Airs, MacBook Pros, and Mac minis. Many were disappointed Apple didn’t target a lower price point with these new products. Still, I expect it will do just that at some point in the future, as the company shifts toward a model comparable to its iPhone lineup, where the current generation shifts downward in price but stays in the market as the next generation of products launches. In the meantime, Apple seems to be driving substantial performance gains from its new chips, although there will be heated discussions about the veracity of those claims and the benchmarks that drive them well into the new year. All told, 2021 should see some fascinating movements in silicon.

5G PC Attach Rate
Another silicon-based area to closely monitor in the new year is the growth in cellular-connected PCs. This has long been a favorite topic of mine, as in the “before days,” I traveled extensively and had little use for a PC without an LTE connection. While few of us are traveling at present, an unexpected side benefit of an always-connected PC has been the ability to use the LTE (or in some cases now, 5G) modem instead of connecting to an overcapacity home broadband network. While your partner, children, relatives, and others fight over limited throughput on the router for Zoom calls, YouTube streams, and the like, a user with a connected PC can collaborate with relative ease.

I don’t expect a radical shift toward LTE/5G-enabled PCs for employees working at home. Still, I expect many companies to look at this option as they continue to figure out where their workforce will sit in the future and how they will pay for connectivity. As more organizations shift toward smaller office settings designed to facilitate meetings rather than house the entire staff every day, connected PCs make even more sense from an infrastructure build-out perspective. One thing is clear: The PC vendors need to build better relationships with the powerhouse carriers in all regions to make these types of technology shifts possible.

Accessories Bonanza
PCs have seen the most headlines, but accessories are a category that has also enjoyed a banner year in 2020. With so many people outfitting home offices, or kitchen tables, with all the pieces necessary to drive productivity, we’ve seen huge growth in monitors, webcams, microphones, keyboards, mice, and headsets. For a time, midyear, it was literally impossible to buy a brand-name webcam anywhere.

I expect these categories to continue to see substantial volumes through much of 2021 as consumers, companies, and schools continue to adapt to what will likely be a mix of in-person and from-home activities throughout this year. Hastily purchased accessories will get replaced by better-quality ones. And in many situations, we may see people looking to procure one set of accessories for use at home and another for use at work. Products that help users efficiently shift between smartphone and PC, PC and tablet, and tablet and smartphone will be in demand. Expect the ear-worn wearables category to continue to grow by leaps and bounds.

Here Comes DaaS
Finally, to close out, I return to Device as a Service (DaaS), one of my favorite topics. All signs point to the pandemic as a driving force in many companies’ broader digital transformation efforts, of which DaaS can be a crucial part. For many, shifting from a traditional model of procuring, deploying, and managing devices themselves to one where they pay an OEM or MSP to do this work offers clear advantages in terms of efficiency and workload. Based on my conversations with players in this space, COVID has accelerated many companies’ interest in DaaS, especially as they look toward the future of their distributed workforce and how they equip it to drive productivity. Expect to hear a great deal more about DaaS in 2021 as companies of all sizes take a closer look at the benefits it offers.
This has been a very challenging year, and like most people, I’m eager to close out 2020 with an eye toward a challenging but hopeful 2021. Many thanks to all who read me here at Techpinions. Happy Holidays and I look forward to seeing you all in 2021.

My Experience with The Mac mini M1

For the past few weeks, I have been using the M1-powered Mac mini as my primary day-to-day computer. I have not lived through as many Apple processor transitions as others who have been sharing their thoughts, but I vividly remember Apple’s transition to Intel. Ever since the late ’90s, Macs have been my primary computers. I have fond memories of bringing my Mac into meetings with PC OEMs and Intel in the early 2000s and always taking flak for not using Windows and Intel, or what some would call a real work computer, which is why I found it ironic, after Apple switched to Intel, how many Macs I saw floating around Intel when I was there for meetings. But that’s another story.

During the Intel transition, the first Macs running Intel silicon had a somewhat rocky beginning, with many apps not optimized for x86. Rosetta handled the translation of code from Apple’s PowerPC architecture to Intel’s x86. The main thing about that transition burned into my mind is endless bouncing icons (the animation an icon performs in the dock while an app is opening on macOS) and how many times an app either never opened, requiring me to force quit, or did not open at all, requiring a total restart. Once apps were open, I recall the experience being mostly solid, with the exception of some crashes. Still, those minutes wasted opening an app as Rosetta translated are burned into my memory.

Rosetta 1’s translation abilities were dependent on the code and on Intel’s processing power at the time, which was nothing like what it is today; the x86 architecture is now far more sophisticated and powerful. Given the underlying technology at the time of Apple’s transition from PowerPC to x86, some of these hiccups are understandable in retrospect. Still, many of us early users remember the pain of that transition.

Fast forward to today and my experience so far with the M1-powered Mac mini, and it is night and day. After a smooth migration to the Mac mini M1 from my 16″ MacBook Pro, I instantly started picking up work where I left off on the MacBook. I had a little flashback anxiety as I first launched Superhuman, the email client I run for Gmail, which is fairly lightweight. The Superhuman icon started bouncing in the dock and did so for 20 seconds or more. I briefly had Rosetta 1 deja vu. I immediately quit the app and tried again to see what would happen, and it opened instantly. I quit and reopened it several times, and each time it opened instantly, to my relief. I then went on to open nearly all my other applications, which I knew were not M1 native, like the Office apps, Zoom, Slack, and some apps I use for editing audio. All of them took 20-30 seconds to open the first time but then opened instantly every time after.

What makes Rosetta 2 unique this time around is that Apple is translating more like an ahead-of-time (AOT) compiler than a just-in-time (JIT) compiler. Upon first open, Rosetta 2 essentially translates all of the x86 application code into native M1 instructions, which it then runs at every subsequent open. This is the key experience that led many reviewers to remark on how well x86 apps performed on the M1 and how they felt like native apps. That’s because, after the Rosetta 2 translation was performed, they basically were native apps. This is a huge advantage of Apple being the designer of the Mac CPU: for the first time, Rosetta and the silicon could be co-designed and optimized with unique knowledge of each other. This was not a luxury Apple had in past silicon platform transitions.
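To make the AOT-versus-JIT distinction concrete, here is a minimal, purely illustrative sketch. It is not Apple’s Rosetta 2 implementation; the function names and the cache are hypothetical stand-ins that show why paying the translation cost once at first launch makes every later launch feel native, while a translate-on-every-run model never amortizes that cost.

```python
import time

translation_cache = {}  # hypothetical persistent cache of already-translated apps


def translate_x86_to_arm(x86_code: str) -> str:
    """Stand-in for binary translation; the sleep simulates the one-time cost."""
    time.sleep(0.2)                      # pretend translation takes a while
    return f"arm64<{x86_code}>"          # pretend this is native M1 code


def launch_aot_style(app: str, x86_code: str) -> str:
    """AOT-style model: translate the whole app once, then reuse the cached result."""
    if app not in translation_cache:     # only the very first launch pays the cost
        translation_cache[app] = translate_x86_to_arm(x86_code)
    return translation_cache[app]        # later launches run the cached "native" code


def launch_jit_style(x86_code: str) -> str:
    """JIT-style model: translate on every single run, interleaved with execution."""
    return translate_x86_to_arm(x86_code)


if __name__ == "__main__":
    for attempt in range(3):
        start = time.perf_counter()
        launch_aot_style("Superhuman", "mov eax, 1; ...")
        print(f"AOT-style launch {attempt + 1}: {time.perf_counter() - start:.2f}s")
    # Only the first AOT-style launch is slow; a JIT-style model pays the cost every time.
```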

I timed each non-native app’s translation process upon first open, and the average time was 26.7 seconds. That’s basically the time it took for the M1 to translate an x86 app to native M1 code. This is pretty impressive when you consider all that is going on under the hood.

Once the translation process was complete, all my non-M1-native apps performed just as they did on my Intel-based MacBook Pro. For reference, I timed how long it took the same apps to launch on both the M1 Mac mini and my Intel-based MacBook Pro (2.4 GHz 8-core i9 processor). The table below shows the average time, in seconds, for each app to launch and become usable on each system. I timed each app five times and then averaged the results.

App        | M1 Mac mini | Intel 16″ MBP
Superhuman | 4.41        | 6.61
Zoom       | 2.52        | 2.12
Word       | 0.94        | 0.97
Excel      | 1.89        | 2.52
Slack      | 5.65        | 2.87
PowerPoint | 0.97        | 0.95
Teams      | 4.05        | 3.91
Outlook    | 1.03        | 0.95
Photoshop  | 6           | 7.5
MS Edge    | 1.81        | 1.35

As you can see, each system had comparable app times. What was surprising was how none of my work-flows were interrupted as I moved from the Intel MacBook Pro to the M1 Mac mini. Literally zero disruption.
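For anyone who wants to reproduce this kind of comparison, here is a minimal sketch of the bookkeeping involved, assuming you simply record a handful of stopwatch readings per app. The app names and sample values below are illustrative placeholders, not the measurements behind the table above.

```python
from statistics import mean

# Hypothetical stopwatch readings (seconds) for five launches of each app on one machine.
launch_samples = {
    "Superhuman": [4.3, 4.5, 4.4, 4.5, 4.35],
    "Word": [0.92, 0.95, 0.93, 0.96, 0.94],
}

for app, samples in launch_samples.items():
    print(f"{app}: {mean(samples):.2f}s average over {len(samples)} launches")
```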

In terms of speed and performance, while I’m not a benchmarker, I did try to tax the M1 system in the various ways I could with the software I have. Below is the CPU performance while I opened an x86 app to trigger Rosetta 2 translation and scrubbed a 4K video in real time, all while on a Microsoft Teams video call (Teams is not optimized for M1). I know it is a weird workflow to test, but it was the most CPU-intensive software at my fingertips.

As you can see, the spike was caused by the Rosetta 2 translation but never during this 1-2 min span did I see the system become sluggish, unresponsive, or have the spinning rainbow of death known on macOS.

What I found most intriguing about this CPU chart is that the M1 has four performance cores and four low-power cores. The chart shows that even the four low-power cores kick in, to a degree, during CPU-intensive workloads and are not just there for lower-performance tasks.

Suffice it to say, the M1 has gone beyond my expectations right out of the gate, and from the reviews, it looks like I’m not alone. And any localized issues anyone has experienced with some non-optimized apps will be a thing of the past by the end of next year, when nearly all, probably all, macOS apps will be optimized for the M1.

The M1 and the future of Macs
I wanted to conclude with a few thoughts on the role the M1 will play in the future of the Mac and Apple Silicon. I’ve long been bullish on Apple’s ambitions with custom silicon, since Apple has helped establish the trend toward purpose-specific silicon and away from the old world of general-purpose silicon. We also know Apple has a growing team of in-house silicon designers, which gives it a huge advantage in custom silicon. What is exciting about Apple now challenging its silicon team to set a new bar for high-performance computing is how those efforts will benefit Apple as a whole, not just M1 Macs.

The work the team puts in to push the limits of performance-per-watt in high-performance applications will likely trickle down to things like the iPhone, iPad, future augmented or virtual reality products, and more. In other words, this effort will bear fruit across Apple silicon, not just Mac hardware.

Having experienced some of the latest processors from Intel and AMD, I am convinced Apple will set a new bar not just in notebooks but in desktops and workstations as the M1 scales up to those classes of machines. And this leads me to the last point I want to make.

Apple making processors for Macs is extremely good for semiconductor competition. Not to say that AMD and Intel have been standing still, but both companies have been focused on competing with each other, and largely in the datacenter, when it came to pushing performance and high-performance design and applications. Apple has now created a new dynamic where both companies are competing with Apple to bring their PC customers a solution that will compete with M1 Macs. If they don’t, Apple could run away with the high end of the PC market, which would have a drastic impact on the PC category, one I’m not sure Intel, AMD, and the PC OEMs have fully realized yet.

Podcast: Qualcomm Snapdragon 888, AWS reInvent Conference, Android Enterprise Essentials

This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell discussing the debut of Qualcomm’s Snapdragon 888 chip and its potential impact on the premium smartphone market, analyzing the news on custom chips, new computing instances and new hybrid cloud options from Amazon’s AWS Cloud computing division, and chatting about the debut of Google’s Android Enterprise Essentials for simply and securely managing fleets of business-owned Android phones.

Podcast: Verizon-Apple, Apple App Store, AMD RX6800 GPUs, Microsoft Pluton, Apple Diversity

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing Verizon’s recent 5G swap offer with the iPhone 12, Apple’s changes to its App Store fees for developers, reviews of the new M1-based Arm Macs, reviews of AMD’s latest Radeon 6800 GPUs, analyzing the potential impact of Microsoft’s new Pluton security processor for PCs and its partnerships with AMD, Intel and Qualcomm, and chatting about the new inclusion and diversity officer at Apple.

Gaming Consoles’ Battle Royale

I lost count of how many times, over the weeks leading into the release of Microsoft’s Xbox Series X and Series S and Sony’s PlayStation 5 and PlayStation 5 Digital Edition, I was asked which model will sell more. Sales figures often make the headlines, but I think this year there is a much more interesting conversation to be had about gaming as a form of entertainment and its role during the pandemic.

It is clear to me that gaming has been changing in more ways than one. First of all, gaming has been democratized by smartphones, both in the number of people who would now consider themselves “serious gamers” and in how non-console gaming is no longer seen as inferior. Technology on smartphones and PCs has kept pace, if not outperformed in some respects, delivering a rich and immersive experience that is different because content developers have embraced these devices and created content tailored to them. Titles like Fortnite and PUBG proved the opportunity for success in both brand share and revenue.

With broader appeal came the added value of game streaming as a form of entertainment in its own right. This, in turn, helped expand gaming even further, not just in the number of gamers but as a revenue source for merchandise and a content category that drives a level of attention never quite seen before.

A Different Approach to Gaming

This is the backdrop for the market in which the new consoles landed, and it highlights the clearly different strategies Microsoft and Sony are taking toward the reinvigorated gaming market. As much as we can compare and contrast the four options consumers have across Microsoft and Sony, there is more than hardware that comes into play when these two companies think about their addressable market.

It is clear to me that Microsoft is focused on reaching gamers wherever they might be. Whether you play on Xbox, a PC, or an Android phone, as long as you subscribe to Xbox Game Pass Ultimate, you are guaranteed an ample selection of titles and an experience that can connect you and your gamer friends across devices. Considering that the cloud is at the center of Microsoft’s business, it is easy to understand the shift to game streaming. The recent acquisition of Bethesda Softworks speaks precisely to developing a strong pipeline for Xbox Game Pass Ultimate’s subscribers. As Microsoft’s CEO Satya Nadella said:

“Gaming is the most expansive category in the entertainment industry, as people everywhere turn to gaming to connect, socialize, and play with their friends. Quality differentiated content is the engine behind Xbox Game Pass’s growth and value—from Minecraft to Flight Simulator. As a proven game developer and publisher, Bethesda has seen success across every category of games, and together, we will further our ambition to empower the more than three billion gamers worldwide.”

At the other end of the console spectrum, we have Sony, which sees the future of gaming centered on a much more immersive experience where all senses come together to elevate your gaming on a console. In an interview with the Washington Post, PlayStation’s President and CEO Jim Ryan said:

“We want to give gamers clarity, we want to give them certainty. We want to future proof them so that they know the console they buy will be relevant in several years time. It’s a considerable capital outlay, and we want to make sure people know they are buying a true next-generation console.”

The approach here is about delivering the best hardware and purposefully designed content that elevate each other. Sony’s first-party content, like Astro’s Playroom, guides PlayStation 5 users through the new DualSense controller while offering content creators an opportunity to see what is possible. Staying focused on console gaming will limit overall reach, but it also means engaging with the most profitable audience.

Covid’s Impact on Demand

Strategy aside, both Microsoft and Sony face the difficulty of predicting demand in a market that has never quite seen so many different variables playing both for and against sales.

Covid’s impact on the economy has dampened consumers’ confidence, negatively impacting spending on non-essential items. Yet, spending more time at home has created a stronger craving for content and entertainment, which might benefit from some redirected discretionary budget that would have otherwise gone to eating out or other entertainment such as movies, theater, and other social activities.

The launch of the new consoles was quickly followed by a worsening of the pandemic and the start of what political and health experts have called a “dark winter.” The prospect of having to spend the next two to four months at home might drive more consumers to make the investment after initially dismissing it as unnecessary when life seemed to be reopening to old routines.

We also have to remember that TV and movie productions have also been impacted, limiting the new content reaching consumers over the next six to twelve months. This leaves a void that gaming can certainly help to fill. Some big titles like Halo have been delayed, but the catalog of existing games is so wide on both Xbox and PlayStation that consumers need not worry that their investment will not pay off. Considering the growth in game sales seen thus far, reported by NPD at a record $11.6 billion in the April to June timeframe, there is clearly a lot available to purchase. That was an increase of 30% compared to the same period in 2019, and a 7% increase over the first quarter of 2020’s (January to March) record of $10.9 billion.

Time will tell, but I am confident the renewed and expanded love affair with gaming will remain strong even when life returns to being lived out and about.

Podcast: Apple Arm-Based Macs, MediaTek Summit

This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell analyzing the debut of Apple’s Arm-based M1 processor and the new Mac Mini, MacBook Air and MacBook Pro that include them and discussing the news from Taiwanese chipmaker MediaTek’s Summit event and the new low-cost 5G modems and Arm-based, Chromebook-focused SOCs that they unveiled.

Custom Mac Silicon Frees Apple to Iterate, or Not, as it Pleases

Apple’s event this week brought a few surprises (three new Apple Silicon-based Macs, not just one) as well as some frustrating non-surprises (no touchscreens, no LTE or 5G, and no new entry-level starting prices for notebooks). Based on Apple’s deliberately vague testing proclamations, the M1 system on a chip (SoC) certainly appears to be a powerful performer that will also offer substantial battery life improvements. We will know more about both in the coming days as reviewers begin the process of benchmarking and real-world testing. What is clear, however, is that the M1 has already caused Apple to radically rethink the role the processor plays in differentiating products in its lineup. Just as important, I believe it will give Apple significantly more freedom to iterate around its Mac form factors, features, and, eventually, prices.

Three New Macs
I won’t go into too much detail about the new Macs, as Carolina covered those details in her excellent day-two column. Like her, I wasn’t surprised that Apple chose to effectively hold the line on its pricing (with the exception of the $100 drop on the Mac Mini) because it needed to establish out of the gate that the M1 isn’t a low-cost alternative to Intel, but a powerful custom-designed replacement that merits like-for-like pricing.

What’s truly remarkable about this product launch is that by using the same chip across a new Mac Mini, MacBook Air, and MacBook Pro, Apple effectively eliminated one of the key ways the PC industry (and Apple itself) has traditionally segmented its products. Processor performance level and branding have always been a primary differentiator in the market. With the M1, Apple says the quiet part out loud by acknowledging that a single chip, placed into three different thermal envelopes, will drive three different performance levels.

That last part is going to fry a lot of people’s noodles, especially those who have traditionally made buying decisions based on the often small but highly marketed speeds and core count of one system’s processor over another. You can see it now as buyers wrestle with a decision between buying a fanless MacBook Air or a MacBook Pro that offers improved performance predicated entirely on the fact that it has a fan that lets the M1 processor run faster, longer than the one in the Air. Ultimately, I expect that the M1 and Apple’s subsequent Mac processors will lead an increasing percentage of their customers to think less about the processor and what its esoteric speeds and feeds mean to them. But this transition will take time, and it will cause some hand-wringing along the way.
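To see why the same chip behaves differently in different chassis, here is a toy model in Python. The clock speeds and time windows are invented numbers with no relation to Apple’s actual specifications; the point is simply that a longer boost window, which active cooling buys you, translates directly into more total work over a sustained load.

```python
# Toy model (not Apple data): the same chip placed in two thermal envelopes.
# A fanless chassis forces an earlier drop from boost clocks to base clocks.

def sustained_work(duration_s: int, boost_window_s: int, boost_ghz: float, base_ghz: float) -> float:
    """Total 'work' done: boost clocks until the thermal budget runs out, base clocks after."""
    boost_time = min(duration_s, boost_window_s)
    return boost_time * boost_ghz + (duration_s - boost_time) * base_ghz

# Hypothetical numbers chosen only to show the shape of the effect over a 10-minute load.
fanless = sustained_work(duration_s=600, boost_window_s=120, boost_ghz=3.2, base_ghz=2.0)
actively_cooled = sustained_work(duration_s=600, boost_window_s=480, boost_ghz=3.2, base_ghz=2.0)
print(f"fanless: {fanless:.0f}   actively cooled: {actively_cooled:.0f}")
```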

One of the early issues with the shift to the M1 is RAM limitations. All the new Macs offer a standard starting RAM allotment of 8GB and a maximum of 16GB. For years it has been notoriously hard—if not impossible—to add aftermarket RAM to a Mac, but with the M1, it is simply not possible because the memory is part of the SoC. This 16GB limit likely isn’t a dealbreaker for most MacBook Air buyers. Still, it has caused a small but vocal minority of Mac diehards to pump the brakes on new MacBook Pro and Mac Mini purchases because they are unwilling to buy a system with less than 32GB of RAM. Here’s the thing: Conventional wisdom (and experience) may dictate that power users need 32GB, but that may not be the case with the M1’s Unified Memory Architecture. We will have to wait for the benchmarks and real-world testing to know for sure.

Another notable thing about the new M1-based Macs is that they all support just two Thunderbolt/USB 4 ports, whereas some previous versions of both the Mac Mini and MacBook Pro offered up to four. It is unclear if this is an M1 limitation or an Apple design decision. However, this too may be a dealbreaker for some users, who—in the case of the Mac Mini or MacBook Pro—will then need to look back at the legacy Intel-based products still on offer or wait for subsequent product launches.

No Touch, No LTE, and No New Form Factors…Yet
I was not surprised that Apple opted to go with its existing chassis for these product announcements. Particularly in the Tim Cook era, Apple tends to be quite deliberate when it comes to new product designs, so it made sense that the new products look just like the old products. It was also not shocking that Apple did not add a touchscreen to the Mac and did not roll out an LTE or 5G option for the MacBook Air or MacBook Pro. And to many people’s disappointment, the company did not introduce a new lower-priced notebook. However, the fact that Apple did not do any of these things this week doesn’t mean that it won’t in the future.
In fact, I see that as one of the great benefits of the move to Apple Silicon. While the company decided the shift to the M1 was enough change for 2020, the flexibility inherent in rolling its own silicon—and knowing the ramifications of a future chip in terms of battery life, performance, heat, I/O, and cost—uniquely positions the company to iterate on the Mac in ways it has never done before.

Obviously, there will be new designs, likely in the service of Apple’s obsessive drive to make everything thinner and lighter. With its own silicon on board, Apple will be free to make design changes without waiting on a partner or making concessions for features it deems unnecessary for the Mac. I’m less convinced Apple will add a touchscreen to the Mac, even though many of us have pushed for it for years. However, the support for iOS apps, enabled by the M1, could mean Apple rethinks this position in the future. There is a slightly better chance that Apple eventually rolls out a Mac with cellular connectivity. This would require a fundamental redesign of its notebook chassis, and if it were to happen, it would likely occur using a 5G radio. There is a great deal of interest in connected PCs today due to the massive shift to work from home, but it’s clear Apple won’t be moving quickly to try to catch that wave.

Finally, I expect Apple to eventually roll out more affordably priced Mac notebooks (note I didn’t say low-end). It is instructive to look at how Cook has approached this in his other categories. Traditionally, it was with waterfalled products—last year’s iPhone drops in price, the previous year’s product also decreases in price, and Apple keeps selling them to reach a wider audience. More recently, the company has launched purpose-built products designed to appeal to more value-oriented buyers, such as the second-generation iPhone SE and the Apple Watch SE. I suspect Apple will begin the process here by waterfalling M1-based products into lower price points as it announces new products with next-generation M-Series processors.
By shifting its lineup away from Intel, Apple will no longer have to deal with people always pointing out that it sells products with years-old chips that look dated versus the other PC players. Yes, Dell, HP, Lenovo, and others will always ship products with the latest Intel processor. But, Apple will argue, this two-year-old M Series processor is still competitive because it is custom-designed to run this product.

I’m eager to see the first benchmarks and to test out one of the new Macs myself. If the new M1 performs as well as Apple suggests, then this silicon transition is likely to have a much more significant impact on Apple (and its competitors in the market) than any previous transitions. This has been a resurgent year for the PC category, and things just got a whole lot more interesting.

Apple’s One More Thing Turned Out to Be Three

Apple announced its transition to Apple Silicon back in June. Since then, industry watchers have been hypothesizing about which Mac would be the first model to sport Apple’s new silicon design. At the past few events, leakers had left only a few surprises for the official announcements, but for the “one more thing” event, Apple delivered at least a couple, both from a device launch perspective and in terms of strategy.

A More Aggressive Transition

The MacBook Air was the best bet for guessing where Apple would debut its own silicon. A very popular model in the portfolio, the MacBook Air appeals to users who care about mobility, battery life, and a slim design but don’t usually run very intensive workflows. Expectations were met as Apple introduced the MacBook Air as the first home for the new M1 chip.

But Apple did not stop there!

After the MacBook Air, Apple added the M1 chip to a new Mac mini, a model that Apple last updated back in 2018. The Mac mini is Apple’s most affordable Mac, and the newly launched model starts at $100 less than its predecessor. This is the only price concession Apple made, contrary to what some industry watchers were expecting. Some analysts argued that the in-house design would allow Apple to lower prices without necessarily impacting margins. I was somewhat skeptical of such a move for two reasons. First, Apple is not under any time pressure to gain market share. Over the past couple of quarters, sales have been growing due to higher demand driven by Covid-19 and supply issues in the Windows camp. Second, aggressive pricing might have sent the wrong signal about how competitive the new silicon is compared to Intel’s designs. Given the times we are in, when people are re-evaluating the tools they use while working from home, the Mac mini certainly offers Apple an interesting opportunity.

The big surprise of the event, however, was that the M1 chip made its way into the 13″ MacBook Pro. Most people expected that support for what is considered the most popular Mac model and the model that appeals to more pro users might come in a second wave in 2021 once Apple has some time to put the M1 to a real-world test.

Such a broad portfolio right out of the gate shows the confidence Apple has in its solution overall: the combination of silicon, OS, and app optimization that Apple claims will deliver unprecedented performance.

The other surprise, and sign of confidence on Apple’s part, was timing. While we knew a launch would happen before the end of 2020 (Tim Cook even confirmed as much during the latest earnings call), most expected the first products to actually ship in 2021.

Macs Get iOS Apps but No Touch

It was fascinating to notice that, at least on Twitter, not many people commented on the lack of touch. It seems as though most have given up even on the idea that Apple might change its mind about adding touch to the Mac.

The M1’s ability to support iOS apps without developers having to optimize them would have been the perfect reason to add touch to the Mac. Even if Apple still does not believe in touch on a vertical screen, it could have explained that users might want the option.

An alternative that could have met users halfway would have been to add the same cursor solution Apple put on the iPad Pro’s Magic Keyboard, something I have hypothesized about since that product was released.

Instead, we have neither.

Maybe this is so that developers actually choose to optimize their apps for the Mac so users can have a better experience. It will certainly be interesting to see if Apple can replicate the developer engagement it had on the iPad. You might remember that when the first iPad came to market, Apple had the 2x option that made iPhone apps run on the larger iPad screen out of the box without developers having to do anything. That played a significant role in helping people see the iPad’s potential, but the actual value came when apps were purposely designed for it. With the Mac, Apple was never able to replicate the success of the iOS app ecosystem. The numbers just did not make it worthwhile for mass-market app developers to invest in the Mac. The hope now is that, as volumes grow from the appeal of a consistent experience between iPhone and Mac, developers might feel differently about their investment. If this plays out, Apple will be able to achieve even more differentiation against Windows-based PCs, which should be the ultimate end game.

The M1’s performance and OS optimization might be enough to get Mac users to upgrade, but Apple cannot stop there. We know switching OS is a much bigger decision for people to make, especially in an enterprise environment. Support for iOS apps can really facilitate that move. It would be much easier for an enterprise that is already supporting iOS devices to justify expanding to the Mac than it ever was to justify adding Mac support alongside Windows.

No New Designs

Another expectation people had was that, together with the new silicon, there would be a new Mac design for whatever product Apple decided to ship first. This did not turn out to be true, which is another clue to how Apple is thinking about the transition to its own silicon design.

The shift is not about differentiating within the portfolio, which would have been easier with a new hardware design. The M1 is about perfecting the Mac formula. Changing the design would have distracted from the true value of these new products and diluted the impact of what Apple is building. Some of the benefits the M1 brings could have enabled a change in design, shaving a couple of millimeters here and there or maybe using a different screen technology. Had Apple done that, like-for-like comparisons with current products might have been harder to make.

At the “one more thing” event, Apple sold one thing only: the power of vertical integration, the lesson it learned and that made it so successful with the iPhone. If you buy into it, Apple will have a much stickier proposition than any hardware design change it could offer.

Oculus Quest 2: Ready For Prime Time?

I recently started testing the new Oculus Quest 2 virtual reality (VR) headset from Facebook, and it’s a very good product. As I noted back in September, it’s an evolutionary step up from the original headset, with a handful of technical improvements, delivered at a substantially lower starting price ($299). I expect the Quest 2 to sell very well, bringing quality VR to a much wider audience than ever before.

Smooth Setup Experience
The Quest 2 is slightly lighter and smaller than its predecessor, and I found these decreases made it noticeably more comfortable to wear. Some reviewers have complained about the Quest 2 head strap, which is all fabric versus the plastic one on Quest, but I didn’t have any trouble adjusting the fit to my head. That said, it’s clear the new head strap was an area where Facebook shaved cost, and the company offers several after-market versions (starting at $49) for those who want something more robust. The other area where Facebook saved some money is the inter-pupillary distance adjustment. While the original Quest had a slider that allowed for precise adjustments, the new Quest has just three settings. I used the default middle setting, so this also wasn’t an issue for me.

After completing the physical adjustments, running through a setup tutorial, and installing a system update, I was off to the races. I don’t remember much about setting up the original Quest, but with the Quest 2, Facebook has created a smooth and mostly frictionless experience that should be straightforward for even a VR novice.

Notably Better Display and Next-Gen Silicon
One of the significant changes with the Quest 2 is the shift from dual OLEDs to a single, fast-switching LCD that offers 1832 x 1920 resolution per eye. The display supports a 72Hz refresh rate at launch, and a future software update should enable a faster 90Hz refresh rate. In a word, the display looks fantastic. I found the new screen to be even more immersive than the Quest, although when you’re fully engaged in a great game or app, you stop paying too much attention to the pixels. After spending about 30 minutes in the Quest 2, I put on the original Quest, and at this point, the screen enhancements were much more noticeable. Perhaps the most significant improvement on the new headset is the much less perceptible screen door effect.

The Quest 2 also includes a faster processor, Qualcomm’s Snapdragon XR2, and more RAM than the original Quest. I didn’t notice better performance with my existing apps, but I suspect that we’ll see more software take advantage of the better silicon over time. I also expect the new processor to help drive a better PC-tethered experience through the Oculus Link. I haven’t yet acquired the right USB Type C cable to test this feature, but I look forward to doing so soon (and playing Half-Life: Alyx).

The other update to the Quest 2 is to the touch controllers. The new version has a slightly wider, rounder surface area where you place your thumbs. I don’t find them to be noticeably better than the original versions, although I do wish they were plug-in rechargeable versus a standard AA battery. One thing worth noting is that since the launch of the original Quest, Facebook has rolled out hand-tracking capabilities, and I was able to set this feature up in the Quest 2. At present, the apps I’m using require controllers, so I used hand tracking primarily for navigation. But I’m excited to see more apps use hand tracking, as it has the potential to increase the feeling of immersion inside VR dramatically.

Ready for Prime Time?
All told, I’m very impressed by the Quest 2, and the product should sell very well for Facebook this holiday season. In fact, in many countries—including the United States—we are still dealing with a pandemic where the infection rates are going up instead of down, which means smart people will be spending more time at home in the coming months. Throughout much of 2020 VR headsets and the Quest, in particular, have been nearly impossible to buy as demand radically outpaced supply. Our view into the supply chain suggests Facebook has placed massive orders for the Quest. Even so, the headset initially sold out (it’s available again now). However, accessories for the device, including the previously mentioned headstrap, are pretty hard to come by.

So I think the Quest 2 will sell very well through the end of 2020 and into 2021, even as it faces stiff competition from the launch of new consoles from both Microsoft and Sony shipping this month. The Quest 2 should please existing VR users looking for an upgrade, and it will delight anyone who has never used VR or whose only VR experience was in an early smartphone-based product. The Quest 2 is also poised to help drive the continued robust adoption of VR in business.

Is the Quest 2, and VR more broadly, ready for a move into the mainstream? That’s still unlikely. But with each iteration, the hardware gets better and less costly, and the experience more immersive and enjoyable. What the market needs now is more mainstream content. To date, gaming remains the primary consumer driver, and while it is obviously a lucrative market, it’s not going to win over everyone. To date, there’s still no killer app that would make the average consumer buy into VR. Facebook has long suggested that social could be that use case, and there’s no doubt that games with a social aspect have legs in VR. When Facebook launches its upcoming Horizon social platform (currently available as an invite-only beta), we’ll get a chance to see if that is what VR needs to win over the masses.

Podcast: AMD Radeon 6000, Lenovo TechWorld, Tech Earnings, Cisco Partner Summit

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing AMD’s purchase of Xilinx and the debut of their Radeon 6000 GPUs, chatting on news from Lenovo’s TechWorld event, analyzing quarterly earnings from Apple, Amazon, Google, Microsoft, Facebook and more, and speaking about Cisco’s Partner Summit event.

Podcast: Qualcomm 5G Summit, Apple iPhone 12, DellTechWorld, Citrix Workspace Summit

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news from Qualcomm’s 5G Summit event, discussing initial real-world 5G performance of the iPhone 12, chatting about Dell Technologies’ new Project Apex “as a service” offering news from their DellTechWorld event, and reviewing the latest news on employee experience from Citrix’ Workspace Summit event.

Apple Takes Another Swing at the Smart Speaker Market

Apple kicked off its annual iPhone launch event this week by announcing the $99 HomePod mini. I’m excited to try the product, which utilizes several pieces of custom silicon, leverages the company’s strong position in categories such as smartphones and wearables, and once again emphasizes Apple’s research into delivering high-quality sound. All that said, while I’m sure a good number of consumers entrenched in the Apple ecosystem will buy the HomePod mini, I’m still not convinced the product will dramatically change Apple’s overall fortunes in the smart home market.

Impressive Tech
The HomePod mini is an impressive bit of tech, all wrapped up in a 3.3-inch tall, acoustically designed seamless mesh fabric that comes in space gray or white. It leverages Apple’s S5 chip, which first shipped in the Apple Watch Series 5, as the brains of the operation. That’s a notable change from the full-sized HomePod, which uses an A8 chip that first shipped in the iPhone 6. In addition to driving smart assistant functions, Apple says the S5 drives computational audio that adjusts dynamic range and the speaker hardware to optimize sound based on the content that is playing.

The HomePod mini also includes Apple’s U1 ultrawideband chip, which Apple started including in iPhones in 2019 and added to the Series 6 Apple Watch. When you bring a U1-enabled iPhone close to the HomePod mini, it sees the phone and offers up handoff opportunities. For example, if you are listening to music on your phone as you enter the room with the HomePod mini, you can transfer the audio over to the smart speaker.

Perhaps the most compelling new feature is Intercom, which lets you leverage multiple HomePod speakers (including the original) to make house-wide announcements. Yes, competitors such as Amazon’s Echo already do this, but Apple’s special sauce is that in addition to its smart speakers, the message will also play out over all the iPhones, iPads, Apple Watches, and AirPods in the house, as well as through CarPlay.

Like the original HomePod, which sells for $299, the mini will offer multiroom audio, stereo pairing, and smart hub features. However, it does not support spatial awareness or home theater with Apple TV 4K like its bigger brother.

Apple’s Challenges
The new HomePod mini is a huge step in the right direction for Apple and should help it make inroads into the smart speaker category where its original, high-priced HomePod has languished. But as Ben noted earlier this week, the elephant in the room remains the issues with the “smarts” behind its smart speaker: Siri. As a smart assistant, it is still not very good. And while Apple can point to stats about how much better Siri is than before, the fact of the matter is that the company has a huge job ahead of it in convincing people who have had poor experiences with Siri to keep coming back and trying it again.

I test a great deal of hardware, and I have easy access to the smart assistants from Amazon, Google, and Apple. And in my personal life, I always use the first two before I turn to Siri. In fact, the only time I use Siri is on the Apple Watch, when I’m on the go. My experiences with Siri have been so frustrating that I took the extra step of installing Amazon’s $50 Echo Auto in my vehicle so I can access Alexa there instead of using Siri on the iPhone sitting on my passenger seat.

And it is easy to fixate on Siri’s issues versus the smart assistants from Amazon and Google, but Apple’s challenges extend beyond that. In China, for example, companies such as Xiaomi, Alibaba, and Baidu all have voice assistants that my colleagues there say perform better than Siri. According to IDC’s Smart Home Tracker, China is the second-biggest smart speaker market behind the U.S.

Beyond the Siri issues, one of the other significant challenges Apple faces is the fact that many early adopters have already chosen their smart assistant. We have standardized on Echo (seven and counting) in my house, and our utilization has only gone up during the pandemic. It is mostly basic stuff, loads of timers, weather reports, music and podcasts, and occasional questions about store closing times or random facts. We also use Alexa to turn off lights, and we use it all the time to call other rooms or make household announcements (which now show up on our iPhones running the Alexa app).

And while the HomePod mini’s $99 price is way more attractive than the HomePod’s current $299, it is nowhere close to the Echo Dot’s list price of $50, to say nothing of the fact that you can often buy the Dot for $30 or less. And that is a bit of an issue, as smart speakers really begin to show their value when you have more than one. Part of the reason we standardized on the Echo was the simple fact that it was affordable to put them throughout the house. I have no doubt that the HomePod mini will sound better than my current third-generation Dots, but in my house, sound quality is important only in a few rooms, and frankly, only to me.

Finally, it is important to note that while Apple did say that the new HomePod mini would support some third-party music services, it doesn’t currently offer support for Spotify. For many, that will be a dealbreaker, and I hope it is a fix Apple can make soon after launch.

Still a Growing Market
While Apple certainly faces some serious challenges in the smart speaker market, the HomePod mini’s introduction puts it in a much better competitive position. And its ability to leverage the iPhone to drive interactive experiences with the speaker could be a difference-maker for many. If the company can better leverage its HomeKit capabilities to make its smart speaker a more capable home automation hub, that should resonate with many people, too. Finally, there are undoubtedly plenty of Apple customers who have waited on the smart home sidelines for the company to field something more competitive before jumping in.

In fact, while we’ve seen the smart speaker category expand at a very rapid pace in the last few years, we still expect plenty of growth in the coming years. According to IDC’s Smart Home Tracker, smart speaker volumes will grow at a double-digit pace next year, pushing toward 160M units worldwide. With the new HomePod mini, I expect Apple will grab a more significant share of that pie. To do so, however, the company must keep pushing. In addition to continued work on Siri and the inclusion of Spotify, one other thing I’d like to see Apple do is iterate faster in hardware. It announced its original HomePod way back in 2017 (and launched it in early 2018). This market—and its competitors—are evolving too fast to wait years between product announcements.

Podcast: Apple iPhone 12, US 5G Networks, HomePod Mini

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the news from Apple’s big launch event with a detailed analysis of the latest iPhone’s new features, its 5G support, and its impact and opportunities for US 5G carriers including Verizon, T-Mobile and AT&T.

The iPhone 12 Family Future-Proofs Apple’s Lineup

At the “Hi, Speed” event Apple aired on Tuesday, it was all about the iPhone. Of course, it was about the new iPhone 12 models, all four of them, but the iPhone was front and center even as Apple introduced the new HomePod mini and its vision of the smart home.

Many have covered all the speeds and feeds of the different models, so I will not spend time doing that. If you have missed something, you can easily find a model comparison on the Apple website. What I want to spend time on are a few bigger-picture points that help position the new models in the market and in the broader Apple context.

The Super-cycle

I was asked several times whether Apple would see a super-cycle with the iPhone 12, and the answer I gave was honest but not very helpful: it’s complicated. All things being equal, Apple has the perfect product lineup: a new design, four products that span a wide enough price range, and a new technology, 5G.

However, the reality is that the iPhone 12 models are hitting the market during an economic downturn and, in the US, a time of considerable uncertainty. On the other hand, the heightened reliance on technology that we have all experienced during the pandemic might counter this market negativity by encouraging an upgrade cycle for the device we still turn to the most: the smartphone.

On Apple’s side, its user base has the largest proportion of users falling into higher-income brackets, a factor that will soften the impact of the economic downturn. Outside the early-adopter group looking for the iPhone 12 Pro and iPhone 12 Pro Max, and likely to jump on the new products as soon as they are available, we might see a more spread-out cycle, partly because some countries are opening up while others are fighting the threat of a second wave of the pandemic.

Three LTE models remain in the portfolio, the iPhone SE, iPhone XR, and iPhone 11, which will continue to drive decent volume for Apple, both from users upgrading from older models and from users and markets moving to 5G more slowly. Being recent models, combined with the value Apple always provides through software updates, gives buyers choosing them confidence in their purchase.

5G

Apple had a slide during the event that said “5G just got real.” Depending on where you sit on the 5G hype cycle, you either think that 5G has been real for over a year now or that we are all still waiting for it to be real. As is often the case, the truth is somewhere in between. 5G has been available in many markets for a while, but network coverage and performance still leave much to be desired.

I did not think Apple would make a big deal out of 5G, and by and large, that was the case. Tim Cook reminded us of the privacy and security benefits of 5G over Wi-Fi. The rest of the time dedicated to the topic was spent explaining how Apple differs in its implementation of 5G.

Apple made a couple of interesting decisions given when it is joining the 5G party. First, as expected by most, the full iPhone 12 family is 5G. Apple highlighted its space-efficient design, which allows enough room for multi-band support.

In the US, the entire iPhone 12 family supports mmWave rather than just having one Verizon model. This helps Apple with economies of scale and better production and inventory management at a time when the pandemic is making it harder to forecast. With mmWave support spread across the family, the premium that we have seen other manufacturers put on the Verizon SKU is less evident, but it might become more apparent as we see pricing in other regions.

The second interesting decision was to apply AI to 5G through “smart data mode.” This means that the iPhone smartly decides when to use 5G or LTE based on speeds and use cases. When your iPhone doesn’t need 5G speeds, it automatically uses LTE to save battery. But when 5G speeds matter, the iPhone 12 starts using it. Apple also delicately pointed out that users will experience different 5G speeds based on where they are located.

iPhone SE vs. iPhone Mini

When the iPhone SE was launched in April 2020, I wrote:

“The iPhone SE feels like a different kind of product, though. It is not a model we should expect to be refreshed with the regular cadence we see in the rest of the portfolio. Instead, it’s a product that serves the purpose of getting the most pragmatic users to upgrade after holding on to their phones for years. These users might be coming from a hand-me-down or a secondhand iPhone or even be Android users looking for their first iPhone… If I had to guess when a good time for the next refresh of the iPhone SE might be, I would say that in another four years sounds like a good time considering that by then, 5G will be truly mass market.”

After this week, I have started to change my mind, and I am guessing that the iPhone SE we have in the lineup today might be the last model to bear the name. Going forward, the iPhone Mini will reflect a more compact form factor with all the essential features and a lower price point. Next year, as the iPhone 13 Mini is introduced, I would expect to see the iPhone 12 Mini hitting a price point much closer to the iPhone SE, which will probably remain in the portfolio until 2022.

iPhone 12 Pro Max vs. iPhone 12 Pro

At the other end of the lineup from the iPhone 12 Mini, we find the iPhone 12 Pro and iPhone 12 Pro Max. After a couple of years of providing parity of features but a difference in size in the high-end models, Apple returned to making the largest model the one with the best camera system. When Apple first did this with the iPhone 6 Plus, it was unprepared for how much the mix skewed towards the larger size. Users were prepared to buy a larger device to take advantage of the superior camera system.

Both iPhone 12 Pro models offer the ability to shoot in RAW format, meaning that users can manually make the photo look its best rather than having the iPhone automatically do it for them. For video, both Pro models support HDR video with Dolby Vision, up to 60 fps, and even better video stabilization. The iPhone 12 Pro Max takes the pro camera experience even further with a new ƒ/1.6 aperture Wide camera with a 47 percent larger sensor delivering an 87 percent improvement in low-light conditions. It also includes the expansive ultra-wide camera and a 65 mm focal length Telephoto camera for increased flexibility with closer shots and tighter crops. Combined, this system offers a 5x optical zoom range.

The iPhone 12 Pro Max’s impact might end up being quite different from what we saw with the iPhone 6 Plus. The iPhone 12 Pro Max might not skew sales volume, but it may well capture a new generation of creators, a segment that Apple has always cared a great deal about and that over the years has become more and more mobile-focused.

Much like many users buy into 5G to future-proof their smartphone purchase for a few years, it seems to me that Apple has used 5G to rethink its iPhone portfolio, setting it up in a way that makes it easier for buyers to pick the right product for them. I will be sharing more about the iPhone 12 cameras, the new MagSafe wireless charging, and 5G performance as I test the devices between October 23 and November 13. With no MacBooks announced at the “Hi, Speed” event, I am also guessing we will have “one more thing” from Apple before the end of the year.

Podcast: Nvidia GTC, Arm DevSummit, Google Workspace, AMD Ryzen 3, Big Tech Antitrust

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news from Nvidia’s GTC Conference and Arm’s Developer Summit, as well as the potential impact of a merger of those two companies, discussing the latest version of Google’s productivity suite, chatting about the latest desktop CPU introductions from AMD, and pondering the potential impact of the US government’s huge new report on potential antitrust concerns with Amazon, Apple, Facebook and Google.

Google Workspace Elevates Collaboration by Focusing on the Task at Hand

In July, Google gave us a taste of a more integrated collaboration experience when it brought Meet and Chat into Gmail. This week the metamorphosis continued as G Suite became Google Workspace. Back in July, I looked at the news from a communication vs. collaboration perspective, making the point that communication is really at the center of Google’s collaboration strategy. However, this week, as Google talked more about Workspace and how it plans to deliver a more integrated experience across collaboration and communication, it seemed that content is at the center of Workspace and at the center of collaboration.

With Workspace, Google is addressing some specific issues. First, the fact that collaboration has changed due to Covid-19. Of course, we have all experienced that in one way or another, and once we move past analyzing the hours we spend on video, one aspect that really has changed is that you are likely to be collaborating more with people outside your organization. This is because those face-to-face meetings, which would have happened independently of the work to be done, are now happening online. This means that the tools must be more flexible when it comes to sharing information and collaborating than they might have been before.

Second, Google wanted to address first-line workers by making them better connected with their own organization, which is likely not in the same place as they are. This goal of connecting people to get the job done extends to reaching consumers, the final users of whatever product businesses are trying to sell. By doing so, of course, Google bridges the consumer and the business worlds by giving people (2.6 billion MAU) tools they are already very familiar with.

While we will eventually go back to an office, we will face a more heterogeneous work environment than we did before the pandemic. This means that connecting people who are not physically together in a rich but simple way will remain a priority for a long time.

Workspace also builds on Google’s focus on delivering helpful technology. This has become the big theme that brings together the different areas of the company, from CEO Sundar Pichai to the Head of Made by Google, Rick Osterloh. Helpfulness in this case comes from a simplified user experience but also from the intelligence that Google is able to offer on top. One of the features that best illustrates this balance between simplicity and richness is “picture in picture,” which gives you the ability to hear and see the people you are collaborating with live on a document. This feature was launched in July for Gmail and Chat, and it will soon also include Docs, Sheets, and Slides.

Not Just a Rebranding

I am sure that, given there isn’t an actual new product, one could be tempted to see this as a rebranding exercise. But the new features that have been introduced and the way all the tools come together really speak to the direction Google is taking with productivity, and the G Suite name no longer fit that vision. Workspace is no longer a bunch of apps that take care of the different tasks you need to perform; it’s an orchestrated experience empowered by AI. I wish a name other than Workspace had been given to this hub, especially considering consumers will have access to it as well. The name is often seen as a location, a landing place, which, although technically correct, fails to convey the active lift these tools deliver. Yet, it is certainly a better illustration of the experience Google wants to deliver than G Suite was.

Google also introduced a new pricing plan that adds a level for medium businesses and now takes Workspace from small businesses all the way to large enterprises in a more granular way. The differences in price mostly account for the number of users, cloud storage size, and security features. Not much has changed with the first two levels, Basic and Business, still priced at $6/user and $12/user. The new tier, called Business Plus, offers enhanced capabilities for $18/user to organizations that might be large in size (up to 250 users) but don’t need the full enterprise-level offerings. I would argue this last one is the category where Google has a lot of opportunity and where organizations might have struggled in the past to feel their needs were properly addressed.

Betting on Content Rather than Meetings

There were two aspects of the announcement that I found particularly clever and want to highlight.

One is that Google provides the option not to sign up for the whole experience, thanks to Workspace Essentials ($8/active user), which lets a business get started with video and collaboration without having to replace its current email or calendar systems. This can lower the barrier to entry considerably if you think about how much effort and disruption migrating mail and calendar systems represents for an organization. It might also help Google enter businesses dominated by Microsoft with a “land and expand” tactic. Our data points to a lot of crossover within organizations between Office 365 and Google Workspace, especially for Docs, which remains the preferred tool for collaboration. This is not a new offering, but it is certainly one that has become much more relevant in the current environment, when digital transformation is accelerating but IT professionals are already extremely stretched.

The second aspect that I believe will give Google an advantage long term is centering the Workspace experience on the content to be created or the task to be completed rather than on the way in which one will do so. There are many collaboration hubs offered by the likes of VMware, Citrix, and, of course, Microsoft. Office 365’s equivalent of Workspace is Teams, where users are led to choose how they work together first. In other words, I go to Microsoft Teams to do a video call or a chat, while I go to a Google Workspace document or email and then decide how I communicate with others about the task at hand. Albeit subtle, the approach Google is taking might better withstand the return to the office and a shift back towards face-to-face meetings when we are eventually able to have them.

It will be fascinating to see how all these different hubs drive value for users. Locking people in is never a good approach. Creating different points of entry and delivering value across the board will ultimately determine success across a workforce that is probably more varied in both age and skills than it has ever been.

Microsoft Expands Surface Lineup; Announces Updates to Windows on Arm

This week the Surface Team at Microsoft announced a new, more affordable notebook called the Laptop Go and an update to the Qualcomm-based Surface Pro X. The former fills a gap in the company’s lineup and is going to appeal to many buyers. The latter reaffirms Surface’s commitment to Windows on Arm and will take advantage of several software updates the company also announced this week, including new and updated native apps and new 64-bit x86 emulation capabilities.

Expanding the Lineup
The Surface team has had a monumental year and a very busy fall. Back in September, it announced the Surface Duo, its category-launching, Android-running, dual-screen-having re-entry into the don’t-call-it-a-smartphone smartphone. I’ve had the privilege of using that product at length, and it’s made a believer out of me in terms of the utility of a two-screen mobile device. Similarly, last year’s Surface Pro X helped cement my long-held belief that there is a place in the market for Windows on Arm. This week’s new hardware does not usher in any new categories but is monumental, nonetheless.

The Surface Laptop Go starts at $550 and includes a 12.4-inch touchscreen display, a 10th Gen Intel Core i5 processor, up to 16GB of RAM, and 256GB of storage. It comes in three colors (Ice Blue, Sandstone, and Platinum) and supports One Touch sign-in through Windows Hello and a Fingerprint Power Button. I’ve long been a big fan of the design of the existing Surface Laptop, now on version 3, which starts at about $960. The new Laptop Go brings many of those same design sensibilities to a lower price point. And like the Surface Go detachable—which brought to market a lower-priced Surface tablet—I expect the Surface Laptop Go to appeal to a wide range of buyers and to ship in notable volumes.

With the Laptop Go, Microsoft is attempting to bring the Surface brand downward into the mid-priced market without tarnishing its premium status. It pulled it off with the Surface Go, now in version 2 and starting at $399, and I expect it to do so here, too. We watched Apple do something similar with the launch of its $399 iPhone SE, which I suggested might just be its most important iPhone launch of the year.

Microsoft’s timing of the Surface Laptop Go launch bodes well, too. As I noted earlier this year, as COVID-19 swept the globe, the first technology product many people, companies, and schools moved to purchase was the PC. That resulted in a blockbuster second quarter, and all signs point to the just-completed third quarter being similarly robust. We should see those strong volumes carry into the holiday quarter, even as the world continues to contend with the pandemic and the resulting economic hardships. All told, it seems a very sensible time to launch a well-designed but affordable notebook into the market.

Focus on Windows on Arm
In addition to the Surface Laptop Go, Microsoft also announced an update to its flagship tablet, the Surface Pro X. The first Pro X launched last October, leveraging the SQ1, a custom processor Microsoft partnered with Qualcomm to design. This year’s refresh offers an update to that processor, called the SQ2, as well as a new platinum exterior finish option. The new device slots in at the top of the Surface Pro X lineup with a starting price of $1,500, while the existing Matte Black version 1 with the SQ1 chip remains in the lineup with a starting price of $1,000.

The updates to the Surface Pro X line will appeal to buyers looking for the best performance they can get from an Arm-based Surface. But perhaps more important was the news that dropped just before the Surface announcement: Microsoft is bringing a host of improvements to the broader Windows on Arm platform and its supporting apps. Chief among them are plans for an updated version of the Edge browser that it promises will run faster and use less battery, and a new native Microsoft Teams application. Finally, starting in November, it will roll out support for 64-bit x86 emulation to the Windows Insider Program. That last part is incredibly important, as to date the platform has only supported emulation of 32-bit x86 apps, a limitation that left out many modern desktop apps. This support, which will roll out widely next year, could be a game-changer for the platform if the performance of those apps is good.

With the updated Surface Pro X and the upcoming enhancements to the Windows on Arm platform, Microsoft clearly affirms its plans to support the platform going forward. To date, industry support for Windows on Arm has been tepid at best, with Lenovo being the only other major PC OEM to consistently ship products using Qualcomm’s newest PC parts. However, as Apple moves to ship its first Macs using Arm-based Apple Silicon later this year, I can tell you that the broader PC industry is watching closely. With these improvements to the Windows on Arm platform, Microsoft makes it much more compelling for these OEMs to support it with new products down the road.

And those same OEMs will also be watching closely to see how both the updated Surface Pro X and the new Surface Laptop Go perform in the market during the all-important holiday quarter. Over the years, Microsoft has built up an impressive portfolio of devices that spans form factors, technologies, and price points, silencing anyone who doubted the company’s commitment to hardware. With these latest products, Microsoft expands its lineup again, positioning Surface for a big holiday quarter and continued growth into the new year.

Whose Home Is It? Amazon? Google? Apple?

In the tech world, September has been synonymous with Apple for many years. Over the past couple of years, however, Amazon has also started to claim the month for its big device reveal. Last year Amazon introduced fifteen new products with Alexa integration and a focus on privacy. This year the number of devices was a little lower, and together with privacy, the focus was on sustainability. But that was last week! This week it was Google’s turn to announce a new smartphone, the Pixel 5, as well as new smart speakers and Google TV.

Apple’s iPhone event is rumored for mid-October, and the same sources expect the launch of an updated HomePod and a HomePod mini. So, it might seem unfair to attempt to name who is winning control of the home before then. Yet what we have seen from Amazon and Google shows enough of a difference in approach to at least attempt to highlight what it would take for Apple to be considered at the same level.

The Core Business

I said before that looking at a company’s core business model will shed light on many of its products and business decisions.  If your business is built on selling hardware, it is more likely that you focus on individuals. If your business is centered on advertising, you are also likely to focus on individuals.

As Prime services started to expand, Amazon was able to cater to both the individual and the household. This approach seems to be working and can grow to the car, which Amazon clearly sees as an extension of the home. Within a home context, Alexa remains extremely useful, especially since individual users can choose to register their voices so that Alexa knows who is interacting with it. Alexa can now even join multiple people in a conversation. The car is no different from the home. The interactions we are likely to have are related to content, navigation, or the car itself, making it reasonably straightforward to deliver value. Out in the world, Amazon and Alexa quickly lose impact as users turn to their phones’ assistants. The recent moves into wearables with earbuds, glasses, and fitness bands all aim at addressing this weakness by offering Amazon valuable ways to collect information and provide value when out and about.

As Amazon moves beyond content into services such as home security, health, and even personal care, it starts to lock people into services that are about delivering a better quality of life, in one way or another, rather than entertainment. This is a similar formula to the one Apple has used for the Apple Watch, where providing health benefits has considerably increased the appeal of the device. Apple will continue to find ways to use this aspect of the Apple Watch to open the door into its ecosystem, as it did with the recently launched Family Setup. For Apple, however, monetization will come from hardware, while for Amazon the opportunity rests in the services, not the hardware.

Google falls somewhere in between Amazon and Apple by delivering good value-for-money hardware and services. But the real value is not in the services per se; it is in the AI, often associated with Google Assistant, that delivers helpfulness. This is possibly Apple’s weakest link against both Amazon and Google. Siri has improved, and it can provide moments of delight, like reading your messages when you are wearing AirPods. Yet, across the board, Siri lags both Alexa and Google Assistant. Value also depends on what kinds of services Apple will embrace. While I expect Apple to continue to get deeper into health and content, I do not expect it to have as wide a range of services or to penetrate the home with the number of devices we will continue to see from Amazon.

The Phone as an Entry Point to the Home

There are different paths into our homes, but the phone remains the strongest because it is easier to drive value from the outside in than the other way around. This is especially true for digital assistants: while it is not the case right now, most of us normally spend more time away from home than in it, and a digital assistant that can always be with me will simply be smarter because it will know more about me. The phone is the significant advantage Google and Apple have over Amazon. It is the device we interact with the most, creating stickiness and generating data these companies can use to improve our experience with both the device and their services. The phone’s importance does not mean that Amazon cannot play a substantial role in the home; it has clearly demonstrated the opposite thus far. However, it means that Amazon will have to have a broader set of devices to capture our time and interactions and find different hooks for users.

As critical as the phone is to getting into the home, more is needed to be useful in a family context, and this is where Apple has to cover more ground. The expansion of Apple TV as an app is helping on the content side. More HomePod models will help with music, podcasts, and the smart home. HomeKit support on other brands’ devices will not be enough for users to connect the dots between the device they are using and Apple. As unsexy as it is, home networking seems an obvious opportunity for Apple considering its firm stand on privacy. I am not sure that when Apple exited that business, it was clear that delivering a networking solution meant providing a digital security and privacy solution not just for the home but for the family. It might also be interesting to see whether Apple believes that delivering more screens to the home should only be done through iPads rather than through a HomePod with a screen.

They say there is strength in numbers. It seems to me that when it comes to the home, strength is in the meaningful interactions a brand can deliver. Whether those interactions are through hardware, software, or services does not matter as long as the user is crystal clear on who is bringing them value.

Podcast: Microsoft Ignite, AMD, Arm, Qualcomm Semiconductors, Amazon Fall Product Blitz

This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell analyzing the news from Microsoft’s Ignite event, chatting about multiple semiconductor chip and IP announcements from AMD, Arm and Qualcomm, and discussing the large range of products and services that Amazon unveiled.

The Future of Work Calls for Employees’ Wellbeing

Close to six months into different levels of sheltering in place, most organizations have been shifting their focus from temporary measures to supporting working from home long term, whether fully remote or hybrid. A lot comes into play when working or learning remotely: connectivity, security, device deployment and management. But nothing has been talked about more than collaboration tools, maybe because collaboration involves both work and productivity and a natural need for human interaction. Technology providers active in the modern-work space have been adding features and intelligence that make collaboration easier and more effective as well as more natural and less mentally and emotionally taxing.

As Microsoft kicked off their Microsoft Ignite conference this week, CEO Satya Nadella spoke of the foundations that technology must build on. There are four: inclusivity, trust, equity, and sustainability. Nadella also highlighted that when it comes to modern work, collaboration does not start and end in a meeting, and organizations should focus on continued learning and the wellbeing of their employees.

Corporate VP of Microsoft 365 Jared Spataro built on that idea by highlighting the importance of not chasing short-term productivity gains by treating employees like machines.

Microsoft already announced some features focused on making collaboration easier back in July, like Together Mode in Microsoft Teams, but at Ignite, Spataro talked about the responsibility Microsoft feels to study the challenges and opportunities that remote and hybrid work are presenting to organizations across the world. To do so, Microsoft is relying on its own telemetry and first-party research as well as commissioning research from partners, including the likes of Harvard and Stanford. Finally, Microsoft will depend on its daily conversations with customers.

So far, Microsoft has learned from its Work Trend Index that people are experiencing more flexibility and greater empathy for team members. 62% of people surveyed said they feel more empathetic toward colleagues, mostly because they connect with them in their homes. That said, there are concerns about a loss of connection and feelings of isolation. The lack of clear boundaries results in people working longer hours, with growing fears of burnout, especially among information workers and first-line workers. Microsoft Teams usage gives a sense of the new work practices, with hours extending beyond the typical nine to five and into weekends. Workday length increased 17% in Japan, 25% in the US, and 45% in Australia.

According to Spataro, business leaders went from worrying about employees’ ability to be productive while working remotely to worrying about whether people are working sustainably and healthily.

All this information has pushed Microsoft to double down on using technology to empower every person and every organization to achieve more, but to do so sustainably in all the ways possible.

In the new year, Microsoft Teams will add the option to schedule a virtual commute to preserve the time many used in the morning to get mentally ready for the day and to decompress on the way home at the end of it. Microsoft’s global study also showed that 70% of people think meditating could decrease their work-related stress. To help with that, Microsoft announced the integration of the meditation app Headspace right into Microsoft Teams. This is an interesting idea and not that different from some applications that wearables brands like Apple and Samsung have already added to their devices. The interesting part for me will be seeing whether organizations communicate these features to employees as part of a broader effort to improve working conditions or as a box-ticking tool that gets them out of delivering any other support.

Data is helping Microsoft, but it is also helping organizations through the Microsoft Graph. Starting in October, Microsoft will help managers through the integration of Workplace Analytics into Microsoft Teams. Managers will have the ability to analyze teamwork, after-hours collaboration, meeting effectiveness, and focus time. The integration with Microsoft Teams will allow managers to schedule unplug reminders, monitor the number of meetings to avoid burnout, and even check in with employees on how they feel on a particular day. Balancing the attempt to encourage healthier work practices with the feeling of being continuously monitored will rest mostly on how these tools are used, on managers’ transparency, and on the trust they have built with their employees. Suggesting time to unplug while setting unreasonable deadlines will only add to the frustration and ultimately achieve the opposite result, with employees not feeling valued and cared for.

Employees are not the only ones coming to terms with this new way of working. Organizations are facing a similar learning curve. To help them build a more resilient business, Microsoft created a Productivity Score that will help organizations understand how they work and how they function. The Score covers five categories: content collaboration, meetings, communication, teamwork, and mobility, through tools such as the creation of workspaces and industry best practices on meeting effectiveness. The data across the organization can get down to the individual level or be entirely anonymized.

Microsoft is very clear that the Productivity Score is not designed as a tool to monitor employees and their output. To limit the risk that organizations could use the data in such a way, Microsoft only provides data as an aggregate over a 28-day period. The Productivity Score is about measuring the efficacy of the productivity tools and technology employees are using. It is the technology that is being assessed, not the humans.

Despite all these warnings, once again, there is a concern that organizations might use these tools inappropriately because, sadly, it would not be the first time. Today, we have apps like Sneek, which takes photos of employees to see if they are at their desks, while project management programs such as Jira and Basecamp allow managers to spot when workers are not maintaining a high level of output. The reality is that, while we have all proven we can be productive working from home, trust, often, has yet to be earned.