Dolby Brings a New Dimension to Home Entertainment

on November 14, 2018
Reading Time: 5 minutes

Consumers are very familiar with the Dolby brand. Whether you often visit a Dolby Cinema or own a TV or computer that supports Dolby Atmos and Dolby Vision, you know Dolby delivers one of the best entertainment experiences around, one that lets you lose yourself in the content you are consuming.

At a time when delivering an experience has more and more to do with the combination of hardware, software and AI, Dolby brings to market its first consumer device: a set of wireless headphones called Dolby Dimension.

Making Hardware Does Not Make You a Hardware Maker

It is always easy, when a brand brings to market a product in a category where it has not been present before, to think of the move as “entering the market.” Of course, technically this is what they are doing. But there are different reasons why a brand decides to get into a new space. Potential revenue is usually at the core of such a move, but even then, how that revenue is generated differs. Sometimes revenue comes directly from the new product. Other times, the upside comes from how the product boosts brand perception in areas where the name was already present.

When Dolby spoke to me about Dolby Dimension, I thought about how well it fits their history and DNA while also delivering on a market need. To understand why Dolby is taking this step, one should take a quick look at how home entertainment is changing.

A recent Hollywood Reporter study of 2,044 consumers makes it clear that, in the US, binge-watching is becoming the norm, and not just for millennials. While 76% of TV watchers aged 18 to 29 said they preferred bingeing, older age brackets are not far behind, with 65% of viewers aged 30 to 44 and 50% of those aged 45 to 54 preferring to binge. And it is not just about how many people binge-watch; it is also how often they do so. Across the national sample of the October study, 15% say they binge-watch on a daily basis. Another 28% say they binge several times per week.

Many will argue that the wireless headphones market is already highly competitive and that Bose firmly controls the high end of the market, so Dolby should have thought twice before entering this space. But this is where the “entering this space” debate starts. The way I look at it, Dolby was looking to expand the ways its technology and experience can be enjoyed. This took the form of a set of headphones that brings value to a specific set of consumers: people who appreciate high-quality sound, spend hours watching content on whatever screen is most convenient in their home, and see the $599 price tag as an investment in a superior experience that allows them to binge smarter.

It is when you look at the technology packed inside Dolby Dimension and the specific use cases Dolby has in mind that you understand why this is not a simple branding exercise. The initial limited availability in the US market and distribution focused on dolby.com confirm to me that Dolby is not interested in a broader consumer hardware play, which I am sure will let hardware partners exhale a sigh of relief.

Not Just Another Set of Wireless Headphones

Most wireless headphones today are designed for users on the go. They help you stay immersed in your content or your work by isolating you from the world around you thanks to noise canceling.

There are some models on the market, the latest being the Surface Headphones, that allow you to adjust the noise canceling to let some of the world in if you need to. This, however, is done manually.

Dolby Dimension is designed with home use in mind, which means a few things are different. First, the new Dolby LifeMix technology allows you to dynamically adjust how much of the outside world you let in. Settings, activated through touch controls, let you find what Dolby calls the “perfect blend” between your entertainment and your world, or shut the outside world out entirely through Active Noise Cancelling. If you, like me, binge-watch in bed at night, you might appreciate being able to choose full immersion in your content when your other half falls asleep before you and snoring gets in the way. Other times, you might want to be able to hear your daughter giggling away next door because she decided to ignore your multiple lights-off requests!

Over the days I had to play with Dolby Dimension, what most impressed me is how it really gives you the feeling of sitting in a theatre. This is especially striking when you are watching content on a small screen like a phone or a tablet. The sound, which of course Dolby will tell you is half the experience, really brings that little screen to life, letting you enjoy content at its best. I felt so immersed in what I was watching that I am pretty sure I got to experience the kind of “mom’s voice canceling” my kid has naturally built in when she is watching any of the Avengers movies or gaming!

There are a few more details that highlight what Dolby had in mind with these headphones. Dolby Dimension can be paired with up to eight devices, and you can quickly toggle between your favorite three with dedicated hardware keys on the right ear cup. When you pick your device, hitting the key associated with it takes you straight to your entertainment app of choice, like Netflix or Hulu, not just to the device.

Battery life reflects a best-sound approach, delivering up to 10 hours with LifeMix and Virtualization turned on and up to 15 hours in low power mode. So whether you, like 28% of the study sample, binge-watch two to three episodes per session or, like another 21%, watch four episodes at once, you will be left with plenty of power. While we might be tempted to think about a long flight or a day at the office, this is not what Dolby Dimension was designed for and, to be honest, if those are your primary use cases, Dolby Dimension is not really for you.

Headphones are Key to the Experience

It is fascinating how over the past year, or so, headphones have become a talking point in tech. I think the last time that was the case was when Bluetooth was introduced and we got excited about being able to have a conversation on the phone without holding the phone.

When we discuss the removal of the audio jack from our devices or which digital assistant is supported (assistants you can summon with Dolby Dimension), we are pointing to the fact that headphones have become an essential part of our experience. Considering how much time we spend in front of one screen or another, both at home and on the go, being able to enjoy both visual and audio content is growing in importance. As intelligence gets embedded in more and more devices and smaller and smaller devices benefit from higher processing power, headphones can become a device in their own right rather than being viewed merely as an accessory.

While I don’t believe Dolby is interested in becoming a consumer hardware company, I am convinced they will continue to innovate and look at how consumers’ habits are changing when it comes to consuming content. As we move from physical screens to augmented reality experiences and eventually virtual ones, Dolby might continue to take us on a sensory journey through technology and, if needed, hardware.

Chiplets to Drive the Future of Semis

on November 13, 2018
Reading Time: 4 minutes

A classic way for engineers to solve a particularly vexing technical problem is to move things in a completely different direction—typically by “thinking outside the box.” Such is the case with challenges facing the semiconductor industry. With the laws of physics quickly closing in on them, the traditional brute force means of maintaining Moore’s Law, by shrinking the size of transistors, is quickly coming to an end. Whether things stall at the current 7nm (nanometer) size, drop down to 5nm, or at best, reach 4nm, the reality of a nearly insurmountable wall is fast approaching today’s leading vendors.

As a result, semiconductor companies are having to develop different ways to keep the essential performance progress they need moving in a positive direction. One of the most compelling ideas, chiplets, isn’t a terribly new one, but it’s being deployed in interesting new ways. Chiplets are key IP blocks taken from a more complete chip design that are broken out on their own and then connected together with clever new packaging and interconnect technologies. Basically, it’s a new take on the multi-chip module (MCM) concept, in which various pieces of independent silicon are combined in a single package to provide a complete, SoC-like (system on chip) solution.

So, for example, a modern CPU typically includes the main compute engine, a memory controller for connecting to main system memory, an I/O hub for talking to other peripherals, and several other different elements. In the world of chiplets, some of these elements can be broken back out into separate parts (essentially reversing the integration trend that has fueled semiconductor advances for such a long time), optimized for their own best performance (and for their own best manufacturing node size), and then connected back together in Lego block-type fashion.

While that may seem a bit counter-intuitive compared to typical semiconductor industry trends, chiplet designs help address several issues that have arisen as a result of traditional advances. First, while integration of multiple components into a single chip arguably makes things simpler, the truth is that today’s chips have become both enormously complex and quite large as a result. Ensuring high-quality, defect-free manufacturing of these large, complex chips—especially while you’re trying to reduce transistor size at the same time—has proven to be an overwhelming challenge. That’s one of the key reasons why we’ve seen delays or even cancellations of moves to current 10nm and 7nm production from many major chip foundries.

Second, it turns out not every type of chip element actually benefits from smaller sizes. The basic argument for shrinking transistors is to reduce costs, reduce power consumption, and improve performance. With elements like the analog circuitry in I/O components, however, it turns out there’s a point of diminishing returns where smaller transistors are actually more expensive and don’t get the performance benefits you might expect from smaller production geometries. As a result, it just doesn’t make sense to try and move current monolithic chip designs to these smaller sizes.

Finally, some of the more interesting advancements in the semiconductor world are now occurring in interconnect and packaging technologies. From the 3D stacking of components being used to increase the capacity of flash memory chips, to the high-speed interfaces being developed to enable both on-chip and chip-to-chip communications, the need to keep all the critical components of a chip design at the same process level is simply going away. Instead, companies are focusing on creating clever new ways to interconnect IP blocks/components in order to achieve the performance enhancements they used to only be able to get through traditional Moore’s Law transistor shrinks.

AMD, for example, has made its Infinity Fabric interconnect technology a critical part of its Zen CPU designs, and at last week’s 7nm event, the company highlighted how it has now extended it to its new data center-focused CPUs and GPUs as well. The next-generation Epyc server CPU, codenamed “Rome,” scheduled for release in 2019, leverages up to 8 separate Zen 2-based CPU chiplets interconnected over the latest-generation Infinity Fabric to provide 64 cores in a single SoC. The result, AMD claims, is performance in a single-socket server that can beat Intel’s current best two-socket server CPU configuration.

In addition, AMD highlighted how its new 7nm data center-focused Radeon Instinct GPU designs can now also be connected over Infinity Fabric both for GPU-to-GPU connections as well as for faster CPU-to-GPU connections (similar to Nvidia’s existing NVLink protocol), which could prove to be very important for advanced workloads like AI training, supercomputing, and more.

Interestingly, AMD and Intel worked together on a combined CPU/GPU part earlier this year that leveraged a slightly different interconnect technology but allowed them to put an Intel CPU together with a discrete AMD Radeon GPU (for high-powered PCs like the Dell XPS 15 and HP 15” Spectre x360) in a single package.

Semiconductor IP creator Arm has been enabling an architecture for chiplet-like mobile SoC designs with its CCI (Cache Coherent Interconnect) technology for several years now. In fact, companies like Apple and Qualcomm use that type of technology for their A-Series and Snapdragon series chips, respectively.

Intel, for its part, is also planning to leverage chiplet technology for future designs. Though specific details are still to come, the company has discussed not just CPU-to-CPU connections, but also being able to integrate high-speed links with other chip IP blocks, such as Nervana AI accelerators, FPGAs and more.

In fact, the whole future of semiconductor design could be revolutionized by standardized, high-speed interconnections among various different chip components (each of which may be produced with different transistor sizes). Imagine, for example, the possibility of more specialized accelerators being developed by small innovative semiconductor companies for a variety of different applications and then integrated into final system designs that incorporate the main CPUs or GPUs from larger players, like Intel, AMD, or Nvidia.

Unfortunately, right now, a single industry standard for chiplet interconnect doesn’t exist—in the near term we may see individual companies choose to license their specific implementations to specific partners—but there’s likely to be pressure to create that standard in the future. There are several tech standards for chip-related interconnect, including CCIX (Cache Coherent Interconnect for Accelerators), which builds on the PCIe 4.0 standard, and the system-level Gen-Z standard, but nothing that all the critical players in the semiconductor ecosystem have completely embraced. In addition, standards need to be developed as to how different chiplets can be pieced together and manufactured in a consistent way.

Exactly how the advancements in chiplets and associated technologies relate to the ability to maintain traditional Moore’s Law metrics isn’t entirely clear right now, but what is clear is that the semiconductor industry isn’t letting potential roadblocks stop it from making important new advances that will keep the tech industry evolving for some time to come.

Podcast: Samsung Developer Conference, AMD 7nm, Google Policies

on November 10, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Samsung’s Developer Conference and the announcements around their Bixby assistant platform and Infinity Flex foldable smartphone display, AMD’s unveiling of their 7nm Epyc CPU and Instinct GPU for the cloud and datacenter market, and Google’s recent internal policy changes on harassment and other issues.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

News that Caught My Eye: Week of November 9, 2018

on November 9, 2018
Reading Time: 5 minutes

One big eye catcher this week…

Samsung’s Developer Conference

Key news from SDC18 included:

  • New mobile experiences with Infinity Flex Display & One UI: The Infinity Flex Display builds on Samsung’s legacy of category-defining form factor and display innovation, paving the way for a breakthrough foldable smartphone form factor. One UI is Samsung’s new smartphone interface, featuring a clean, minimal design that makes it more convenient for one-handed use.
  • The new Bixby Developer Studio and Bixby Capsules:  Bixby is evolving into a scalable AI platform to support diverse products and services. The company unveiled new ways for developers to easily bring voice to their services.
  • Works with SmartThings certification program: A re-designed suite of tools that make it faster and easier for developers to connect their devices and services to the SmartThings platform.

Despite the Rise of Voice, Screens Still Matter

on November 9, 2018
Reading Time: 4 minutes

Over the last few years, a great deal of discussion has focused on the importance of voice as the next big interface. The explosive growth of smart assistants, in everything from speakers to thermostats to kitchen appliances, bolsters the argument that voice is going to play a big role in how we interact with technology going forward. But throughout the last few weeks, I’ve been struck by just how much attention good old-fashioned screens have been getting in new product announcements. From Apple’s recent launch of new, larger-screened iPhones, iPads, and Apple Watches, to new smart assistants with screens, to Samsung’s teasing debut of its first foldable smartphone, the fact is screens still matter a great deal.

Bigger is Better
I’ve been living with Apple’s new Series 4 watch and the iPhone XS Max and have been struck by just how impactful the new, larger screens have been on how I use the devices. On the watch, I’m using the Infograph watch face that puts a huge amount of information on screen. While I look forward to Apple adding more customizable complications to that watch face (I’d like to have access to messages there), the information density is amazing, making it possible for me to easily access most of the apps I use daily right on the home screen. And the larger screen has led to another usage change: I find myself consuming more content on the watch. No, I’m not going to read a 2000-word news story there, but the larger size makes it comfortable enough to read messages, short emails, and news alerts right on the watch instead of shifting over to the phone.

The huge screen on the iPhone XS Max has led to an evolution in my usage of that device, too. It’s large enough that I find myself perfectly content to consume more content on it than on any phone before it. Traditionally, when I get home, I move from the phone to the tablet, but with this phone, I will often make it well into the evening before I think to make the switch. Which raises the question: With the iPhone getting bigger, what happens to the iPad?

Apple answered that question last week with the launch of its new 11-inch and 12.9-inch iPad Pros. The 11-inch product effectively offers extra screen real estate in a form factor that’s roughly the same size as the previous 10.5-inch product. The real show stopper, however, is the 12.9-inch product. The previous version of this larger iPad was so big it felt a bit unwieldy in hand, but this new product feels dramatically more comfortable to hold and use. While the starting price of $999 will be tough for many to swallow, I expect that big, beautiful screen to drive many to buy this product.

Voice Plus Display is Smarter
One of the most interesting product trends of late has been the move by vendors to add screens to smart assistant products. Amazon did it first with the Echo Show, and the company has continued to iterate in the space with a second generation of that product, as well as the smaller Echo Spot. Google and its partners have recently entered the space, too. I’ve been using Lenovo’s Smart Display with Google Assistant, and I’ve been massively impressed with the product. I’ve written before about how Amazon Echos have taken over our house (six units and counting). But after I installed the Lenovo product in my office, I find myself going to it to ask questions more and more often. I am consistently struck by how much more useful a smart assistant can be with a screen attached. From showing photos, time, and temperature in default mode, to displaying the results of questions asked (in addition to announcing them), to letting me initiate a task by voice but complete it via the touch screen, the simple fact is the whole experience with the smart assistant is smarter with a display.

Dual Screens and Foldable Screens
So, the bottom line here is that even as voice becomes more prevalent, for now, and likely well into the future, screens will continue to play an important role in how we interact with technology. Which is why so many companies are feverishly working to bring to market new dual-screen notebook and foldable screen smartphone products.

I’ve been skeptical about the use cases for products like these. And when Samsung showed off its prototype foldable phone this week, my first response was: What problem is this solving? Followed by: Who wants a double-wide phone screen? While I’m not convinced that the first generation of these products will drive a ton of utility for most people, and the software challenges around the impact on user interface will be harder to address than most people realize, the fact is they could solve a problem: people’s desire for ever bigger screens.

The first generation of these products represents an important and necessary step the industry must undertake. I’ve been following the display industry for many years, and while there have been many, many prototypes, the real learning happens when real products ship. So while I likely won’t be standing in any lines to buy the first dual-screen or foldable device, I am very interested in the next generation of form factors and use cases that will spring from what the industry learns.

At the end of the day, it’s all about offering even more screen real estate in existing or smaller form factors. While voice continues to improve as a mode of interfacing with technology, it still has a long way to go. In the near term, that means companies will continue to find ways to bring larger physical displays to products. Eventually, we’ll get to a point where the only way to offer a bigger screen will be in a pair of augmented reality glasses, where the entire world in front of you is the screen.

Thoughts on Factfulness by Hans Rosling

on November 8, 2018
Reading Time: 3 minutes

It’s been years since I’ve written a book report, maybe decades, but I’ve just completed a book that I want to tell you about. The book, “Factfulness” by Hans Rosling, is one of the most insightful, irreverent, and fascinating books I’ve read in years. I discovered it by way of a post from Bill Gates, saying it was one of the most important books he ever read.

What initially appealed to me was its premise that things are really much better in the world today than we think. That’s something many of us would like to believe, especially during these turbulent political times, but are probably skeptical of, as I was.

The book explains that we get things wrong for many reasons, including what makes the news, how we react to it, as well as relying on old beliefs. We learn pretty quickly how different the world really is.

It shows how we’ve rarely questioned or revisited our beliefs, even though much of the world has been transformed in recent decades. The book opens our eyes and provides reasons why we need to think in different ways.

It provides example after example of how we are misled and confused about the state of the world and presents factual data that corrects our errors. The book is filled with graphs, charts and tables that add much to the author’s assertions and are often eye-openers.

What was most enjoyable was Rosling’s self-deprecating humor and unusual insights into how we come to our beliefs, why we think that way, and how we can be more objective. He uses numerous examples and stories gleaned from his travels fighting epidemics and conducting research around the world, meeting with numerous leaders.

Throughout the book Rosling reports the results of surveys he took among his thousands of audience members, asking simple multiple-choice questions about life in different countries. Time after time he points out how a chimpanzee making a random guess would do a lot better.

One comes away with new revelations about the world and how much it’s actually improved in the past few decades.

While the book is focused on the state of the world’s condition, such things as income, living and medical conditions, lifespan, and education, it’s much more than that. It’s a handbook for improving how we think.

Hans Rosling, who passed away last year, was a Swedish medical doctor, a professor of international health, and an adviser to the World Health Organization and UNICEF. His TED talks have been viewed more than 35 million times.

He wrote the book in the last years of his life with help from his son and daughter-in-law, Ola Rosling and Anna Rosling Rönnlund, who invented a bubble-chart tool for displaying information in a unique and highly visual way; that tool was eventually acquired by Google. They make heavy use of these charts throughout the book, including on the inside covers.

Nearly every page is filled with fascinating information that most of us are unaware of and is even counterintuitive. Rosling goes into detail as to why that’s the case, from explaining the motivations behind journalists, doctors and experts, to explaining the biases we each hold. He does it in logical and non-blaming ways. In fact, when it comes to blame, he has a fascinating section on it.

“The blame instinct is the instinct to find a clear, simple reason for why something bad has happened… It seems that it comes very naturally for us to decide that when things go wrong, it must be because of some bad individual with bad intentions. We like to believe that things happen because someone wanted them to, that individuals have power and agency; otherwise the world feels unpredictable, confusing and frightening.”

This is one of those books that you savor and don’t want to see end. You’ll think about things very differently after reading this remarkable book and might even believe that conditions in the world have never been better than they are today.

What the iPad Pro Enables Matters More Than What It Replaces

on November 7, 2018
Reading Time: 4 minutes

It has been quite fascinating reading the reviews of the new iPad Pro this week, and even more so reading the comments those reviews received. The process highlighted a few interesting challenges we have when we go through significant technology transitions and what might be holding us back from moving faster.

We Feel Compelled to Put Devices into Buckets

Of course, we give everything a name: a baby, a dog, a device. We name items to market and sell them, which, of course, makes a lot of sense, but then, because of that name, we compartmentalize them so we can count them, project them, and see how they are performing in relation to other devices.

Most of the time this works, especially when devices are single purpose: a camera, an MP3 player, a VR headset. But when devices can do multiple things, and what they can do is determined by the software or apps they run or even by the accessories you can attach to them, things get complicated very quickly.

Often we end up creating artificial categories based on how the product is marketed even though the way it is used is very different from what it says on the box.

Those Buckets Might Prevent Us from Looking Beyond the Hardware

For those devices that do multiple things, it is at times harder to see what they are competing with, because the one thing that might drive buyers to get them in the first place differs from person to person. If you think about smartphones, for instance, some buyers might prioritize the screen size, others the camera, and others the quality of their calls. While many of these priorities will be similar, the importance put on those features will differ and, more importantly, what people consider a deal breaker is likely to differ even more.

The look of a product is at times deceiving. Something might look like something else but behave very differently, or something might look very different but behave the same or drive the same behavior. Think about netbooks and notebooks, for instance. On the surface, they looked very similar, with the most significant difference being size. Yet their performance was very different, which in turn drove very different use cases. Another example is lower-end tablets running Android. At the beginning of the market they looked very much like an iPad, but in countries like China they were really competing against MP4 players because their software and weak app ecosystem limited their use cases to watching videos. When you think you are competing with something different than what you actually are, success is hard to come by, as marketing, channel presence, pricing, and target audience will all be off for the product.

The look, however, often determines what people expect. If you quack like a duck, you are a duck, in the same way that if you look like a notebook, I expect you to run a specific OS, support a mouse, and be great at productivity.

Let’s Talk About the Mouse and Other Must Haves

The PC market is a unique case study because of how many entrenched workflows users have. These workflows are old and might not be perfect, but the familiarity we have with them leads us to believe we cannot do without them. The mouse is a good example. For many PC users the mouse is a core part of the experience, which means that because the iPad Pro does not support a mouse, it cannot be a PC replacement.

Let’s look, though, at where the mouse comes from: a world where the PC sat in the same place day after day, after day. Over time that PC moved around with us, and the mouse cut the cord and came with some of us. I will always remember being in an airport lounge at SFO and seeing a lady who set herself up at a table with her laptop, a mouse and a portable printer. In her mind, I am sure, she was being mobile, but the reality was quite different.

If Apple thinks the iPad is a true mobile computer, then it makes sense that it is looking at an alternative to a mouse when you are trying to pinpoint something on the screen. If you are using the device on the subway, as you walk, or in the car, touching the screen with your finger or the Apple Pencil makes much more sense than using a mouse. If I am right, mouse support is not a technical issue but a conscious decision about what fits the experience. The same can be said for support for a wired printer. This might not match people’s expectations of a true computer, but it matches human behavior.

An OS-level file system is another behavioral debt we have. Does it still make sense to have one when much of our work is done in siloed apps and/or stored in the cloud for easy collaboration?

Focus on What Devices Enable

We can argue as much as we want about whether or not an iPad Pro is a PC, but I think we cannot argue with the premise of what the next computing experience is likely to include:

  • a mobile, not a portable device that is always connected
  • a more versatile operating system that transcends product categories
  • multiple input mechanisms: touch, pen, and voice
  • “satellite” experiences driven by other devices such as AR glasses, wearables, IoT devices and sensors

Because of these characteristics, new workflows will be created, and old ones will evolve. As in other industries, like the car industry, these changes will happen over the course of many years and will impact people differently based on their line of work, their disposable income, market availability in their region and, most importantly, their entrenched behaviors. Who gets there first does not get a medal but gets to show the way to others by demonstrating what works and what does not.


Automotive Tech Now Focused on Safety

on November 6, 2018
Reading Time: 3 minutes

After years of hype and inflated expectations, it’s clear that the mania around fully autonomous cars has cooled. In a refreshing, and much needed change, we’re starting to see companies like Nvidia, Intel/Mobileye, and Arm now talk much more about the opportunities to enable enhanced safety for occupants in cars supporting advanced technologies.

It’s not that the tech industry is giving up on autonomy—as recent announcements about new rounds of trials from Lyft, Uber, and others, as well as advanced new chip designs clearly illustrate—but the timeframes for commercial availability of these advancements are starting to get pushed out to more realistic mid-2020 or so dates. Even more importantly, the messaging coming from critical component players is shifting away from roads packed with Level 5 fully autonomous cars within a few years, to ways that consumers can feel more comfortable with and safer in semi-autonomous cars.

Over the last few weeks, Nvidia, Intel, and Arm have all discussed research reports and technology advancements in the automotive market that are focused primarily on safety, with the technology playing a supporting role. Nvidia, for example, released a comprehensive study called “The Self-Driving Safety Report” that provides a view into how the company incorporates safety-related technology and thinking into all aspects of its automotive product developments. The report covers everything from AI-based design, to data collection and analysis, to simulation and testing tools, all within a context of safety-focused concerns.

Intel, for their part, released a comprehensive study on what they termed the Passenger Economy this past summer, but recently touted some findings from the report that focus on the relatively slow consumer acceptance of self-driving cars due to concerns around safety. Essentially, while 21% of US consumers say they’re ready for an autonomous car now, 63% of consumers believe it will be 50 years before they become the norm. To address some of these concerns, Intel is touting its Responsibility-Sensitive Safety (RSS) model, which it describes as a mathematical model for autonomous vehicle safety. The idea for RSS is to develop a set of industry standards for safety that can then be used to reassure consumers in a transparent way about how autonomous cars will function. Recently, Intel announced that Baidu had chosen to adopt the RSS model for its autonomous driving efforts in China.
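To give a sense of what that mathematical model looks like in practice, here is a minimal sketch of the RSS longitudinal safe-distance rule described in the published Mobileye/Intel RSS paper. The parameter values and the Swift wrapper are my own illustrative assumptions, not Intel’s calibration or code.

```swift
import Foundation

// Illustrative sketch of the RSS longitudinal safe-distance rule.
// Speeds are in m/s, distances in metres; all parameter values below are
// placeholder assumptions for illustration, not Intel's calibration.
func rssMinimumSafeGap(rearSpeed: Double,
                       frontSpeed: Double,
                       responseTime: Double = 1.0,  // assumed reaction time (s)
                       maxAccel: Double = 3.5,      // worst-case acceleration of the rear car during that time (m/s^2)
                       minBrake: Double = 4.0,      // minimum braking the rear car then applies (m/s^2)
                       maxBrake: Double = 8.0) -> Double {  // hardest braking the front car might apply (m/s^2)
    // Distance the rear car covers while reacting (and possibly still accelerating).
    let reactionDistance = rearSpeed * responseTime + 0.5 * maxAccel * responseTime * responseTime
    // Distance the rear car then needs to brake down from its worst-case speed.
    let speedAfterResponse = rearSpeed + maxAccel * responseTime
    let rearBrakingDistance = (speedAfterResponse * speedAfterResponse) / (2 * minBrake)
    // Distance the front car covers if it brakes as hard as physically possible.
    let frontBrakingDistance = (frontSpeed * frontSpeed) / (2 * maxBrake)
    // The required gap covers the worst case and can never be negative.
    return max(reactionDistance + rearBrakingDistance - frontBrakingDistance, 0)
}

// Two cars travelling at 25 m/s (~56 mph): with these example parameters the rule
// demands a following gap of roughly 89 metres.
print(rssMinimumSafeGap(rearSpeed: 25, frontSpeed: 25))
```

The appeal of a rule like this is precisely that it can be written down, audited, and standardized, which is the kind of transparency Intel is counting on to reassure consumers.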

Back in late September, Arm announced a new program called Safety Ready that ties together a number of the company’s security and safety technologies into a unified structure that, while not limited to the automotive market, is very well-suited for it. Safety Ready incorporates both chip IP designs and software that are focused on applications where functional safety is critical, and it allows the company to meet key automotive-related functional safety certifications, including ISO 26262 and ASIL D. At the same time, the company also introduced a new automotive-specific chip design called the Cortex-A76AE that integrates a capability called Split-Lock, which allows a dual-core CPU cluster to function either as two independent cores doing separate tasks or as a lockstep pair, where one core can take over immediately if the other fails. As in many automotive applications, redundancy of functions is key for safety, and the Split-Lock capability of this new design brings that redundancy to digital components as well.

While it may seem that all these announcements are a somewhat dramatic shift from how the tech industry had been talking about autonomous cars, in reality they are simply part of a maturing perspective on how this market will develop. In addition, they’re based on some practical realities that many in the autonomous automotive industry have started to recognize. First, as research has continued to show, most consumers are still very leery of autonomous car features and aren’t ready to trust cars that take over too much control.

Second, the level of difficulty and technical challenge in getting even basic autonomy features to work in a completely safe, reliable way is now recognized as being even harder than it was first believed. Even semi-autonomous cars integrate an extraordinarily complex combination of advanced technologies that includes AI and machine learning, advanced sensors and fusing of sensor data, and intelligent mapping, all of which have to work together seamlessly to ensure a safe, high-quality driving experience. There’s no doubt that we will start to get there, but for now, it’s reassuring to see companies focus on the critical safety enhancements that assisted driving features can bring, as we look further out to the world of full autonomy.

The Pen(cil) is Mightier Than the Mouse

on November 5, 2018
Reading Time: 6 minutes

Apple is touting the newest iPad Pro models as the biggest change that has come to iPad since the original device launched. For Apple, iPad has always been their clearest vision of a large screen computing device. Like all of Apple’s products, iPad has an industry-leading customer satisfaction rating, and nearly all consumers of all ages love their iPad. But where iPad has succeeded in consumer sentiment and satisfaction, it has still not met its full potential as the perfect large screen computer for the masses.

Fascinatingly, in poll after poll I’ve seen in both our own consumer research and other sources, many consumers indicate they would gladly make the iPad their primary large screen computer, ditching their legacy PC or Mac, if they believed they could. While we can debate whether this perception is the reality, there is no debating the positive sentiment and desire to go all in on iPad by a large number of consumers.

Even in tech circles, the desire is high to jump to what is considered a better way forward. In countless conversations I’ve had with techies, and even with tech-leaning early adopters in our research panel and on Twitter, I continually hear how they would love to switch to iPad if they could. When I press deeper on the reasons why they feel they still need a PC/Mac, it often comes down to a mouse/trackpad.

More specifically, they explain that many of their primary workflows still require a mouse/trackpad, the precision of a mouse pointer, and the many legacy apps that make efficient use of this legacy solution. After spending some time using the new iPad Pro and Apple Pencil, I was left thinking the future of precision input tools for iOS may be the Apple Pencil, not a mouse/trackpad.

The Pen(cil) is Mightier Than the Mouse
After Apple announced the new iPad Pros, along with an updated Apple Pencil, I began to look at Apple’s website for iPad and looked specifically at the areas where they highlighted some of the new gesture features of Apple Pencil and software experiences that support this feature. It was then I began to wonder if Apple Pencil is Apple’s mouse, or more specifically, their precision input tool for iPad.

With this thinking, I’d like to draw a potential parallel. When the iPhone first came out, there was a heavy dose of skepticism about Apple’s all-screen phone with a soft keyboard. Every single smartphone at the time had a smaller screen but a physical keyboard. Apple’s rationale for the soft keyboard was straightforward. They could simply do more from an input standpoint with a software-based keyboard than with a physical keyboard whose functions are fixed in the hardware. Things like multiple language support, emoji, and a range of other software-based features took smartphone text input to an entirely new level, and there is no going back.

I’m curious if Apple thinks about Apple Pencil in a similar way. Where the software-based keyboard for iPhone opened up entirely new forms of input and new, more efficient typing workflows, I think Apple Pencil has the potential to do the same thing as a precision pointing/input tool.

Apple’s new gestures clearly support this theory. Apple may be easing people into this new functionality, but the idea of a multi-touch function on the Apple Pencil seems like a logical path forward. At the moment, you can customize the double-tap gesture on Apple Pencil to switch between the two tools you use the most. However, Procreate implemented a clever example that brings up a more sophisticated menu designed to use the Pencil gesture to enable a more seamless workflow. Here is a picture of what a double tap with Pencil brings up when you enable it.
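For developers, that double-tap gesture is exposed through UIKit’s UIPencilInteraction API, which arrived alongside the new iPad Pro. Here is a minimal sketch of how an app might respond to it; the Tool type and the tool-switching logic are hypothetical, purely for illustration.

```swift
import UIKit

class CanvasViewController: UIViewController, UIPencilInteractionDelegate {

    // Hypothetical tool model for illustration; a real app would have its own.
    enum Tool { case brush, eraser }
    private var currentTool: Tool = .brush
    private var previousTool: Tool = .eraser

    override func viewDidLoad() {
        super.viewDidLoad()
        // Register to receive Apple Pencil double-tap events on this view.
        let pencilInteraction = UIPencilInteraction()
        pencilInteraction.delegate = self
        view.addInteraction(pencilInteraction)
    }

    // Called when the user double-taps the side of the second-generation Pencil.
    func pencilInteractionDidTap(_ interaction: UIPencilInteraction) {
        // Respect the action the user chose in Settings rather than hard-coding one.
        switch UIPencilInteraction.preferredTapAction {
        case .switchPrevious:
            swap(&currentTool, &previousTool)
        case .switchEraser:
            if currentTool == .eraser {
                currentTool = previousTool
            } else {
                previousTool = currentTool
                currentTool = .eraser
            }
        case .showColorPalette:
            // An app could surface its own tool or color menu here, as Procreate does.
            break
        default:
            break // .ignore and any future actions
        }
    }
}
```

The specific calls matter less than the principle: because the double tap is a software hook rather than a fixed hardware function, what it does can keep evolving with the apps, exactly the dynamic the soft keyboard unlocked for text input.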

While it’s in its early stages, I contrast this with Microsoft’s Windows Pen/Ink support. With Windows, a mouse is still available to use as a precision pointer, meaning most people will still choose this method by default. The presence of a mouse/trackpad enables the user to stay in their comfort zone rather than embrace a new paradigm. The implication is that the pen is likely only used for pen use cases and is not in a position to become the new mouse. The lack of a traditional mouse pointer/trackpad is precisely the reason Apple has the potential to innovate and encourage software to be built around new precision input paradigms instead of old ones.

It will be very interesting to see how Apple continues to develop the gestures feature on Pencil and what developers do with it. I’d love to see Microsoft run with this idea and explore new Office experiences focusing on the pen as a new type of precision input that can go beyond just ink/drawing.

From a use case standpoint, when you need a precision input tool, there is not that much difference in motion between taking your hand off the keyboard to use a trackpad and picking up the Pencil. And I’d argue that what can be enabled by Pencil, gestures, and software will take precision input to a level not possible with a trackpad/mouse.

A few Points about the New Keyboard
I’ve been very clear in my affection for the keyboard design of the Microsoft Surface. As a touch typist, and someone who types north of 7,000 words a week, the feel of the keys when typing is important to me. While not yet perfect, Apple’s new folio keyboard is a huge step up for someone who does a lot of typing like me. I also prefer the additional rigidity of the keyboard cover, which closes and feels like a hardcover book and sits firmly on your lap or table with a strong sense of stability.

Another welcome improvement, and something I have been asking for with the iPad keyboard experience, is a way to alter the screen angle. With the previous-generation iPad Pro case you could only use the screen at a 65-75 degree angle. The new iPad Pro Folio case has an additional groove to dock the screen that is closer to a 90-degree angle, which is a more common notebook orientation. Having both these options is excellent for both lean-back and lean-in workflows and experiences.

Lastly, a subtle point about magnets. Apple’s material science and design team does some excellent work with magnets. The Pencil’s magnetic dock and charging experience allows you to carry the Pencil without worrying about it falling off. Upon first docking it, you will find the hold surprisingly strong. Similarly, the magnetized grooves on the keyboard dock are surprisingly strong. This makes all the difference in the world when you are touching the screen with any degree of force. The firmness and stability of the iPad when used with the new keyboard cover design certainly feel much more notebook-like than ever before.

A Whole Computer and Nothing But
To truly understand the iPad, I firmly believe you have to take the long view. Most consumers are not power users. Most consumers use their smartphones more than their notebooks and most have a somewhat dysfunctional relationship with their notebooks. Most kids are similarly growing up in a world where their smartphone is their primary computer and their comfort level with mobile operating systems is drastically higher than their comfort level with legacy desktop/notebook operating systems. It is within this environment that I feel the iPad will find its sweet-spot.

Apple’s challenge is to make the iPad a somewhat new experience and one that challenges the idea of what you can do with a big-screen computer. While Apple is continuing to make solid improvements in hardware, it will truly be up to their software developers and broader ecosystem to buy into the vision and participate in creating the future. But within that thinking still lies a distinguishing observation about the way a person may perceive the iPad vs a notebook while they are shopping.

When you buy a Mac or a PC, you get the whole computer experience out of the box. You get the screen, keyboard, and mouse/trackpad. That is still not true of iPad. If Pencil can live up to its potential as the precision input tool of the future, then Apple would likely need to include it in the box at some point. Similarly, the keyboard may need to be included. My concern is that the market may perceive the omission of these critical accessories as an admission by Apple that the iPad is not yet a full computer. They may look at the total price and justify getting a PC or Mac because it comes with all the inputs they need in the box for one set price. The iPad is really not a full computer, the way Apple wants to position it, without the keyboard and potentially without the Pencil. Someday, if Apple hopes the market will perceive iPad as a whole computer, it may be necessary to include all the critical components in the box.

After using the new iPad Pro, I’m more convinced than ever that Apple is heavily invested in continuing to advance iPad as the future. It has taken baby steps to get here, and it will still take baby steps forward as it gets closer to reaching its full potential from a hardware, software, and accessory standpoint. But what is most interesting to me in this thinking is how, when I think about the traditional PC/Mac form factors, it seems as though those devices have essentially reached their peak, while iPad still has a lot of opportunity to grow and many future innovations in store.

News That Caught My Eye: Week of November 2, 2018

on November 2, 2018
Reading Time: 4 minutes

Google’s Employees Walk Out

On Thursday, there were a series of employee walkouts at around twelve Google offices around the world. The demands of the employees:

  • An end to forced arbitration
  • A commitment to end pay and opportunity inequality
  • A publicly disclosed sexual harassment transparency report
  • A clear, uniform, globally inclusive process for reporting sexual misconduct safely and anonymously.
  • Promote the Chief Diversity Officer to answer directly to the CEO and appoint an Employee Representative to the Board


The Next Two Years Will be Pivotal for Wireless and Telecom

on November 1, 2018
Reading Time: 4 minutes

The conventional wisdom is that the wireless industry has sort of stagnated. Differences in wireless coverage, data capabilities, and pricing have narrowed. Most smartphones are good, if not great, and even the latest flagship phones show more ‘small i’ than ‘big I’ innovation. It’s been a while since there was a real blockbuster app. And IoT, while growing modestly, is still waiting for its breakout.

But fasten your seatbelts, because 2019 and 2020 are going to be the most pivotal years in telecom in some time, setting the stage for the next phase of connectivity. Of course, 5G will be the headline story. Although we’ll see a handful of deployments before the end of this year, 2019 is when 5G gets going in earnest. That’s when 5G New Radio (NR) becomes commercially available, and true, standards-based 5G services are launched by all major U.S. operators. Additionally, during the first half of 2019, we’ll see the introduction of several 5G smartphones, although it will likely be 2020 before Apple announces a 5G iPhone.

By a year from now, all major U.S. operators will have rolled out 5G to a healthy number of major cities – each of them with a slightly different approach/strategy. There will also be 5G launches in several other countries – notably South Korea, Japan, China, and a handful in Europe. So we should have a pretty good idea of what this first wave of 5G looks like, and whether there are any breakout business cases.

In the U.S., I’m expecting two flavors of initial 5G services: hot spots in cities that will deliver more of a wow factor on speed; and ‘coverage’-oriented 5G, such as what T-Mobile is planning with its 600 MHz-based launch, which will cover a broader geography but will look more like 4G Plus or 5G Minus. Also by this time next year, Verizon will have launched its fixed wireless access service in a multitude of cities, some of them on standards-based equipment, and we’ll have a much better sense of how well 5G mmWave works as an option for fixed wireless access and whether Verizon’s service is taking any meaningful share from the incumbent broadband providers.

And finally with respect to 5G, mmWave spectrum auctions are starting in November and will continue in waves through much of 2019. We’ll see if it’s just the rich getting richer or whether there are any potentially viable new entrants. There are also a large number of 5G spectrum auctions scheduled to occur in Asia and Europe. These will be pivotal in their own right, but will also influence the development of a ‘global’ 5G band and will put pressure on the U.S. to move on mid-band spectrum.

There will also be several developments next year that will set the tone for years to come. The 3GPP is expected to approve Release 16 in the 3Q-4Q timeframe. There are many key aspects here, among the most notable being Ultra-Reliable Low Latency Communications (URLLC, say that five times quickly). This will be among the anchors for exciting new applications leveraging 5G, such as AR/VR, plus the other ‘pillars’ of 5G beyond Enhanced Mobile Broadband, such as industrial IoT and autonomous vehicles.

The launch of CBRS will also be an exciting development. In addition to introducing an innovative spectrum sharing infrastructure that could be a model for other countries and future spectrum auctions, we could also see a new model for indoor coverage and services. The PAL auctions for CBRS will occur later next year or in early 2020. This is also an area where there could be new market entrants, such as cable companies, wireless internet service providers (WISPs), and perhaps some of the more influential Silicon Valley types.

During 2019, we will also see further definition of the industry’s future structure, starting with the likely approval of the T-Mobile/Sprint deal. This could trigger other M&A activity, more broadly focused on the future of broadband rather than just the wireless sector. DISH will also figure in here. Even as it builds its NB-IoT network, we will know over the next 12-18 months whether its spectrum assets are acquired by another operator or whether it plans to build a wholesale 5G network, for which an anchor tenant (Amazon?) is needed.

If you’re not already out of breath, our eyes are also on the following:

  • The launch of T-Mobile’s Mobile TV service based on the Layer3 acquisition. The OTT TV space is pretty crowded already, so T-Mobile will have to do something characteristically disruptive to make a mark.
  • AT&T’s project AirGig. There have been some interesting trials. We’ll know more within a year or so as to the potential commercial viability of this technology.
  • Small cell siting. A Battle Royale is shaping up between the feds and the municipalities regarding new rules on small cell siting (pricing and shot clock). The results here will have important implications on further 4G densification and how quickly high-band 5G is rolled out. The “U.S. Competitiveness” card could be a factor here, as other countries such as China and South Korea are able to get small cells deployed at a much more rapid clip due to a different model of ‘government involvement’.

Most importantly, within 12-18 months we’ll have a much better sense of what 5G really is and can do. This will set the stage for the development of other use cases beyond enhanced mobile broadband. The fixed wireless access piece is important as well. If 5G wireless can be a viable option for broadband, there could be a shakeup of cable’s near monopoly in that market. Who knows – maybe by 2025, a fair number of households will only have to pay for one broadband connectivity solution.

Apple’s Vertical Integration Shines with the New iPad Pro Line

on October 31, 2018
Reading Time: 4 minutes

This week at an event in Brooklyn, Apple launched a new MacBook Air, a new Mac Mini, and two new iPad Pro models. It is interesting to me that these devices came to share a stage, because in a way they represent key products in the history of Apple’s computing offering. The Mac Mini reinvented the desktop computer, focusing on a minimalistic design without sacrificing performance. The MacBook Air took the MacBook line to a higher degree of mobility and introduced an all-flash architecture. And finally, the iPad Pro started to lay the foundation for what Apple calls the “future of computing”. I want to focus on the iPad Pro because to me it is the product with the most fascinating but also the most complex story to tell.

The iPad’s Journey

When the first iPad came to market in 2010, the easiest way to explain it was to describe it as a big iPhone. Users were in love with their iPhones, and they were spending more and more time on mobile devices to the detriment of many other things, including their computers. At least at the start of the whole tablet market, these devices, including the iPad, were seen as a companion to a computer as well as a companion to a smartphone. Back then, the four biggest shortcomings people listed as reasons why the iPad could not replace their computer were the operating system, the keyboard, the screen size, and the performance.

Since then, the iPad has grown in size, power and brains, in particular with the iPad Pro. So much so that the latest additions to the line are, as Apple pointed out, faster than 92 percent of the PCs sold over the past year. You might quibble with this number, but the fact of the matter is that the new iPad Pro models are as powerful as many PCs.

What has also grown is the number of apps that have been designed for the iPad to take advantage of both a touch- and pen-based workflow. At the same time, we have also seen workflows shift more and more to apps, which is helping to shrink the difference between what an iPad and what a Mac can do, especially for a consumer.

The latest iPad Pro models and their increased performance push apps further, as Adobe demonstrated on stage. Adobe was an early believer in the iPad, designing a mobile version of its Photoshop app with touch and pen in mind. At the event this week, Adobe was on stage showing what it referred to as the “real” Photoshop, which will arrive in the App Store in 2019 and offers a full desktop app still optimized, of course, for touch and pen. What was interesting to me, as Adobe went on to show Project Aero, its AR-focused creativity suite, was that on the iPad Pro you can now do what you do on a computer and more. More importantly, you are not necessarily bound to do it in the same way you used to. You can be creative or productive in similar or different ways; the choice is really yours.

The Future of Computing is not for Everybody…Yet

I said several times that I was looking forward to an event where new Mac and iPad models were introduced side by side, because I wanted Apple to tell a story. A story of where computing is going and which device is for whom. Apple did not tell a story, but it certainly tried to shape our thinking around the iPad Pro by talking about sales and performance compared to notebooks. This is where Apple believes the iPad Pro is competing. But Apple realizes that this transition is not going to be as simple as the kind of change they introduced with the MacBook Air. This is because, with mobility, the biggest change in workflow was where you could work, not how. Embracing new workflows will take a while.

It was fascinating to have people point out to me after the event that the only thing the iPad Pro is missing now to be a computer is mouse support. To me, this is a symptom that shows these people are not ready to transition all their computing needs to an iPad Pro. Either because of comfort or because of the kind of tasks they perform, a touch- and pen-first workflow would not suit them. This is why Apple continues to update its Mac line. But it is also why macOS, while remaining a separate OS, will allow apps to look and feel more like they do on iOS, so that workflows can move seamlessly from an iPhone or an iPad to a Mac. The more you are able to do that, the more you will be able to consider an iPad Pro as your main computing device.

Build It and They Will Come

For Apple, there is no question that the future of their computing experience is in the iPad Pro rather than the Mac. You just need to look at the latest products to see how much Apple owns the experience on the iPad Pro: from the silicon, for both performance and intelligence, to the ecosystem of apps and services, to the accessories, pretty much everything they need to control the end-to-end experience.

It is interesting that when I look at the Surface Pro, the sole iPad Pro competitor in my mind, I see clearly that the lack of that vertical control is what is holding Microsoft back, especially in the consumer segment. Ironically, Microsoft is better than Apple at first-party apps that take advantage of what the OS and the hardware have to offer to drive their vision of new workflows, but the lack of custom silicon and the much weaker app store make it much harder for them to compete on equal footing.

Apple has been known to drive change even when the market does not seem to be ready. With the future of computing, they have the luxury of not having to rush. They have built a strong platform and they will continue to lead people to it without yanking away the safety net that Mac products provide to many.

Buying New Tech Before the End of the Year

on October 29, 2018
Reading Time: 4 minutes

If you are keeping up with the news, you know that there is a trade battle going on between the US and China. Our president has already placed significant tariffs on many products imported from China and is looking at adding another $250 billion in tariffs that would cover just about all products coming from China. While talks continue between China and the US, with trade officials trying to avert this new round of tariffs, many of my sources in Washington tell me they believe it is inevitable that President Trump will enforce these new tariffs after the first of the year.

To date, most, if not all, of the major tech companies have had their lobbying arms trying to get the President to back off these tariff threats and to find a diplomatic resolution to this trade problem. However, many in Washington are doubtful that China will give in to US trade demands and are now starting to work out how new tariffs would impact them in the near term.

Most tech companies are now doing some significant long-term planning to try to find ways to avoid paying these tariffs by looking at moving some of the final test and assembly to other countries like Vietnam, Malaysia, or India. They would then ship these products from there, thus avoiding any Chinese tariffs. However, since so much of tech is made in China and will be shipped from there, it would be difficult for the majority of companies to employ this tactic to avoid paying what may be as much as a 25% tariff on goods shipped directly from China.

The economists I have talked to about the impact these tariffs would have on PC and laptop prices say that the worst-case scenario is that they would add a full 25% to the final consumer price of a laptop or PC shipped under these new tariffs. In this case, PC vendors would pass the entire tariff on to the customer.

A best-case scenario is that the PC and laptop companies absorb some of the tariff in their profit margins and pass on only half, or a portion, of the cost to the final buyer.
In either case, after the new tariffs become law, it is very likely that laptops and PCs will have higher prices. That is why, if you are in the market for a PC or laptop, it would be wise to consider buying one before these tariffs go into effect.
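
To make those two scenarios concrete, here is a minimal sketch of the price math. The $1,000 base price and the pass-through shares are hypothetical numbers of my own; they simply illustrate the worst case (the full 25% tariff passed on to the buyer) and the best case (only half of it passed on) described above.

# Hypothetical illustration of tariff pass-through on a laptop's street price.
# The $1,000 base price and the pass-through shares are assumptions, not real data.
TARIFF_RATE = 0.25  # 25% tariff on goods shipped directly from China

def price_with_tariff(base_price: float, pass_through: float) -> float:
    # pass_through = 1.0 means the vendor passes the full tariff to the buyer;
    # pass_through = 0.5 means the vendor absorbs half of it in its margins.
    return base_price * (1 + TARIFF_RATE * pass_through)

base = 1000.00
print(f"Worst case (full pass-through): ${price_with_tariff(base, 1.0):,.2f}")  # $1,250.00
print(f"Best case (half passed on):     ${price_with_tariff(base, 0.5):,.2f}")  # $1,125.00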

From an industry standpoint, any new tariffs could not have come at a worse time. For the last five years, demand for PCs has steadily declined, and only this year have we seen a slight uptick in PC and laptop demand. Even more interesting, the growth has not come in the low end of the PC and laptop market, where margins can be as low as 3%. The growth has been in the $799-$999 range, and we have even seen strong sales for PCs and laptops in the $1,100 to $1,500 price range too.

While margins are better for products in these price ranges, how the PC vendors will handle their pricing under these tariffs is not clear. As I stated above, they could eat some of their margins to offset price rises from the tariffs, but if they want to remain profitable, some of the tariff costs will be passed on to the customer.

However, today's tariffs are not the biggest problem tech companies will face from trade issues with China in the future.
That will come from the Chinese initiative that wants only products made in China sold to the Chinese public by 2025. Under the Made in China 2025 policy, China's current leaders are moving the country to be independent of products and services made anywhere but in China.
While 100% of the products and goods China needs can never come from or be made in China, the country is working hard to get as much as possible created and manufactured in China by 2025.

For example, China is the largest market for US soybeans. It has a plan in place to spend billions on soybean farming in various areas of China and, by 2025, plans to be 100% self-sufficient for its soybean needs. China has already put tariffs on US soybeans and, by 2025, plans to make no purchases of US soybeans at all.

While Trump has tried to get more US companies to manufacture in the US, many of the tech companies are instead expanding their manufacturing for the Chinese market in general and then trying to find ways around the tariffs by pushing final test and assembly out of China. One PC maker told me that even if they wanted to manufacture in the US, the cost of labor and increased real estate and manufacturing costs would add at a minimum 25-30% to the final price of their PCs or laptops. So even with paying the tariffs now (which they hope will be a short-term issue), it would not make that much difference to bring that manufacturing back to the US.

As I look at the current crop of mid- to high-end laptops and PCs, it is clear that you can still get a lot of technology at reasonable prices. But once the new tariffs kick in, if your PC maker has not found a way around them, prepare to pay a higher price for that special desktop or laptop you can get at a reasonable price today.

Podcast: Q3 2018 Tech Earnings Analysis and Outlook

on October 27, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the major tech earnings from this week including Amazon, Google/Alphabet, Intel, Microsoft and others, and analyzing how the overall tech market is evolving.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

News That Caught My Eye: Week of Oct 26, 2018

on October 26, 2018
Reading Time: 5 minutes

Twitter Stock Up After Earnings

Twitter's stock was up 17% on Thursday after its earnings results came out.

Key data points:

  • Earnings per share of 21 cents (adjusted) vs. 14 cents expected by analysts
  • Revenue: $758 million vs. $702.6 million expected, according to the analyst survey
  • Monthly active users (MAUs): 326 million vs. 330.1 million projected by FactSet and StreetAccount


New Phones Helping with Old Phones Addiction

on October 24, 2018
Reading Time: 5 minutes

As we reach the end of the year, we will soon see coverage summarizing the tech trends of 2018. When it comes to smartphones, no doubt edge-to-edge displays, AI, ML, and cameras will all make the top-ten list, and so will phone addiction.

Over the past 12 months, we have seen vendors playing in the smartphone market, at both the OS and hardware level, come up with solutions to help us monitor and manage the time we spend on these devices. More recently, we have seen a couple of vendors pitch smaller form factors with a more limited set of features. These new phones are positioned as a companion to your main smartphone for those times when you want to disconnect a little and be more in the moment.

As a concept, this is not at all different from what we experienced at the start of the smartphone market, when people would still keep a feature phone to use over the weekend. What is different, though, is the reason behind the trend. Back then, smartphones were really more of a work thing, which meant that having them with you all the time kept you on email all the time and therefore on the clock 24/7. Getting that feature phone for the weekend was much more about taking a break from work than taking a break from technology. The promise these new devices are making is to free you from the grip of a smartphone that consumes too much of your time. But is it as simple as wanting to spend less time on these screens?

Phones Do So Much, How Can We not Love Them?

I am not going to argue that our relationship with our phones is a healthy one, but I will argue that most of us do not really want any help. The point is that these phones do a lot for us today. They have come to replace so many other devices in our lives that it makes sense we spend more time on them: listening to music, taking pictures, watching TV shows… We can also do many of the tasks we used to do only on a PC: search, shop, email, game… So our time staring at the little screen has grown.

For many people, smartphones have become a productivity as well as an entertainment center. I took a quick look at my usage over the past seven days, and apparently I picked up my phone on average 94 times and received on average 240 notifications per day, and my usage when I am not traveling is at least 40% lower than when I am traveling and away from a computer. Some of these notifications are from the doorbell or our home security cameras, so not something that necessarily requires interaction on my part, but something that asks for attention nevertheless. When we dig a little deeper, however, it is fascinating to see how much of what we can do with the phone today we used to do not just with a different device but with a non-tech tool altogether. Over the past seven days, for instance, I spent 21 minutes using my phone as a wallet and 38 minutes using it as a camera; I also used it for one hour to navigate my way around the world, 39 minutes talking to someone over FaceTime, 21 minutes filing my expenses, 19 minutes booking flights, 15 minutes shopping on Amazon, and one hour and 52 minutes doing email. I do not really feel bad about any of this, as I see the phone simply as a tool-consolidation effort. Had I done all those things with different devices, there would have been no reason to say I was addicted to something.

We Do What We Want Not What We Should

The problem starts, at least for me, when I see that over the past week I spent over 8 hours on social media. Part of it is work, part of it is information, but truth be told, a lot of it is boredom. For me, and for many others, the smartphone is no different from the TV. We used to watch TV, or have it on, to fill our time even when nothing interesting was on. The smartphone is like having access to that TV whenever and wherever you want, to tune into the reality show that is social media, or gaming, or anything else that helps you fill a void.

This is where the addiction is: in that filler role smartphones play. And it is hard to let go without experiencing some level of FOMO. As with any addiction, self-discipline is not always enough. I know too much chocolate is bad for me, and I can try to limit myself to one square, but it is so much easier when I do not buy any at all. This is the premise of these new phones, like the Palm. They are the equivalent of you not buying the chocolate. One could set limits for all the apps and tools available on the phone through one of the new features like Apple's Screen Time. But that would be the equivalent of limiting yourself to one square knowing you have a whole bar of chocolate in the cupboard.

It might be sufficient for some, as long as you first admit you have a problem, of course. But what happens when you get back to your main smartphone? Are you going to binge-use it to make up for the lost time?

Smartwatches Give Me All the Help I Need

This is why learning to control your use, in my view, will have more long-term effects, and in my case, smartwatches have really helped. Yes, I know: given the numbers I just shared, you are scared to think what my usage was like before!

Smartwatches allow me to take a break from my phone by preventing me from being sucked in for longer than I need to be. Continuing the chocolate metaphor, wearables are the equivalent of someone breaking off a square and giving it to me while hiding the rest of the chocolate.

All those notifications I receive are not always essential. The important ones like a text from my daughter, a call from my colleague or a flight change will come to my wrist, but an Instagram like, a Facebook post or a non-work email will not. Being able to prioritize what is time sensitive and what is not is a great help in cutting back on the number of times you unlock your phone. Getting the urgent stuff to your wrist also makes sure you do not live in eternal fear of missing something, which leads to picking up the phone more often than you need.

Of course, smartwatches are not a magic wand. Users need to spend some time deciding what they want to prioritize so that the watch does not duplicate the phone. I might be wrong, but using a smartwatch to tame phone usage requires an investment in understanding where the problem is. With the simpler phone, the risk of changing behavior to fit the new device is as strong as the risk of relapsing on the smartphone that is still in my pocket. In other words, if I switch my social media time to texting time because my simpler phone does not support apps, I am just changing my addiction, not controlling it.