What I Learnt about My Kid After a Week with a Phone

My daughter is almost eleven, and she has yet to get her own phone. She does not really need one, as she is homeschooled and we take her to most of her activities. For work and play, she has both a Surface Laptop and an iPad. But last week was different: I was attending the Qualcomm Snapdragon Summit in Maui, and she came with me. I let her borrow an iPhone XR so she could keep me posted on her and my mom’s whereabouts over iMessage.

Electronics at home are an “earned privilege.” Most days we trade electronics time for reading, focus during homeschool time, or outstanding behavior. Of course, on holiday with grandma, while mom was busy in meetings, that earning factor went out the window, and there was access to an iPad in the room and an iPhone outside.

Not that I planned it, but this “experiment” of mine came on the back of the 60 Minutes special on screen addiction, which provided a lot of information about the impact of screen time on young brains. If you watched it, or just read through the summary here, you may think I am crazy for still planning to give my child a phone after a week-long try-out. The core reason behind my decision is that I saw how, out in the big wide world, the phone was a tool for her, even more so than any of the electronics she uses at home.

Personalization, Mobility, and Camera

Despite knowing that the phone was a one-week loaner, my daughter took great care in making it hers. After using her Apple ID to log in and restore from her iPad backup, she changed her lock screen and her background, and chose her screen brightness and sound – which, to my surprise, was not set to mute, as mine has been for years now. You could clearly see that personalizing the phone was a way for her to express herself.

It was also evident that the greater mobility of the phone form factor over her iPad got her to use it more across the board, but also a little differently. This meant taking advantage of different features, like maps to navigate our outings, nature apps to recognize local flowers and fish, and a translation app from English to Hawaiian.

The camera, which at home is mostly used to have fun with stickers and silly videos, became a tool to record fun moments with a friend or document local wildlife – not for posting on social media, but merely to make memories. Thankfully, aside from what mom posts, my kid has no social media presence, which for some parents, I know, is too much already!

Different Gaming

Seeing how my daughter can turn into a “gremlin fed after midnight” when she has to stop gaming at home, I was concerned that a phone would just make things worse. I was amazed to see that she played different games from the ones she usually plays on her iPad. Minecraft, Fortnite, and Roblox were put aside for Sonic, puzzle games, and brain-training games.

It seemed as though her iPad was her proper gaming device, while the phone was indeed a mobile gaming device, used more as a gap filler with simpler games. The proper gaming sessions returned in the evening, when the phone turned into her music player and the iPad resurfaced for gaming.

While we have consoles at home, the time spent on them is somewhat limited, mostly because my child is a touchscreen gamer by nature and controllers, to this day, feel a little foreign to her. I do wonder if the success of the Nintendo Switch – which is on her Santa list – is due, in part, to its ability to bridge the console and tablet experiences so well.

Obsession vs. Addiction

What also transpired from the week was the difference between obsession and addiction. There is no doubt that some games lead to behaviors that closely resemble addiction. For my kid, Fortnite, Minecraft, and Roblox certainly bring out her dark side. Yet what drives her is a little different for each game. With Fortnite it is about team play and not letting her team down; Minecraft is about accomplishment; and Roblox is more about the social aspect, as the players are friends from school we talk to!

Outside of gaming, though, I feel that it is more about little obsessions than addiction. This is no different than discovering a book series and wanting to read the whole set in one go, or watching a movie like “Black Panther” enough times to know almost the entire script!

So in the week with her phone, I saw the current Avengers craze moving from the TV screen and comics to memes, a new vehicle for her obsession enabled by the internet. I am sure that in a few months we will move on to something else, in the same way we liked pizza for the past six months and now we hate it!

My main issue with studies that look at screen time as a generic thing is that matters are not that simple. I do not dispute the science. I am sure young brains are affected by what kids do on these devices, but it is precisely what they do that we need to look into. One data point mentioned in the program was that toddlers asked to return an iPad used to play an instrument in a game did so 45% of the time, while toddlers who played a real instrument did so 60% of the time. The key here is not the screen, but the app and the gratification the app gave through different stimuli. I am sure that if you tested kids doing math with pen and paper against kids doing the math on an iPad, they would stop at the same rate when asked!

My Key Takeaway

So what did we learn?

For my daughter, after a week in Paradise, the biggest sadness came not from leaving sunshine, pools, and turtles but rather from returning the phone to me.

For me, this week was key to understanding that learning how my child uses technology is no different than figuring out what sports she wants to engage in, what movies are appropriate for her, and, to some extent, what kind of human being she should be. Like anything else, kids need guidance on what is right for them and what is not, but this has much more to do with how they engage with the screen than with screen time per se. In other words, not all activities done through a screen are created equal, and it is up to me as a parent to guide her to those that enrich her life. Of course, guidance alone is not going to be enough – if you are a parent, you know that – so having tools that help you monitor, set the right access, and make sure your child is not taken advantage of is indispensable. Pandora’s box does not have to be ripped wide open!

Microsoft Browser Shift Has Major Implications for Software and Devices

Sometimes it’s the news behind the news that’s really important. Such is the case with the recent announcement from Microsoft that they plan to start using the open-source Chromium project as the basis for future versions of their Edge browser. At a basic level, it’s an important (and surprising) move that seems significant for web developers and those who like to track web standards. For typical end users, though, it seems a bit ho-hum, as it basically involves under-the-hood changes that few people are likely to think much about or even notice.

However, the long-term implications of the move could lead to some profoundly important changes to the kinds of software we use, the types of devices we buy, the chips that power them, and much more.

The primary reason for this is that by adopting Chromium as the rendering engine for Edge, Microsoft should finally be able to unleash the full potential of the platform-independent, web-focused, HTML5-style software vision we were promised nearly a decade ago. If you’ll recall, initial assurances around HTML5 said that it was going to enable software that could run consistently within any compatible browser, regardless of the underlying operating system. For software developers, it would finally deliver on Java’s initial promise of “write once, run anywhere.” In other words, we could finally get to a world where everyone could get access to all the best software, regardless of the devices we use and own, and the ability to move our own data and services across these devices would become simple and seamless.

Alas, as with Java, the grandiose visions of what was meant to be didn’t come to pass. Instead, HTML5-based applications struggled with performance and compatibility issues across platforms and devices. As a result, the potential nirvana of a seamless mesh of computing capabilities surrounding us never came to be, and we continue to struggle with getting everything we own to work together in a simple, straightforward way.

Of course, some might argue that they prefer the flexibility of choices and unique platform characteristics, despite the challenges of integrating across multiple platforms, application types, etc., and that’s certainly a legitimate point. However, even in the world of consistent software standards, there was never an intention to prevent choice or the ability to customize applications. For example, even though Chromium is also the web rendering engine for Google’s Chrome browser, Microsoft’s plan is to leverage some of the underlying standards and mechanisms in Chromium to create a better, more compatible version of Edge, but not build a clone of Chrome. That may sound subtle, but it’s actually an important point that will allow each of these companies (as well as others who leverage Chromium, such as Amazon) to continue to add their own secret sauce and provide special links to their own services and other offerings.

By moving the massive base of Windows users (as well as Edge browser users on the Mac, Android, and iOS, because Microsoft announced their intentions to build Chromium-powered browsers for all those platforms as well) to Chromium, the company has single-handedly shifted the balance of web and browser-based standards towards Chromium. This means that application developers can now concentrate more of their efforts on this standard and ensure that a wider range of applications will be available—and work in a consistent fashion—across multiple devices and platforms.

There are some concerns that this shifts too much power into the hands of a single standard and, some worry, to Google itself, since it started the Chromium project. However, Chromium is not the same as Chrome (despite the similar name). It’s an open-source project that anyone can use and add to. With Microsoft’s new support, they’ve ensured that their army of developers, as well as others who have supported the Microsoft ecosystem, will now support Chromium. This, in turn, will dramatically increase the number of developers working on Chromium and, therefore, improve its quality and capabilities (in theory, at least).

The real-world software implications of this could be profound, especially because Microsoft has promised to embed Chromium support into Windows. Doing so will allow web-based applications access to things like the file system, the ability to work offline, touch support, and other core system functions whose absence has previously prevented browser-based apps from truly competing against stand-alone apps. This concept, known as progressive web apps (PWAs), is seen as critical in redefining how apps are created, distributed, and used.
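The offline and caching capabilities behind PWAs are standardized web APIs (Service Workers and the Cache Storage API) that Chromium already implements. As a rough sketch – with hypothetical file and cache names of my choosing, not anything Microsoft has specified – this is the kind of code that lets a browser-based app keep working without a connection:

```javascript
// sw.js – a minimal service-worker sketch of the offline support a PWA relies on.
// File names and cache names below are illustrative assumptions.
const CACHE = "app-shell-v1";
const ASSETS = ["/", "/index.html", "/app.js"];

// In a browser, `self` is the service worker's global scope; fall back to a
// no-op stub so the sketch can also be read (and run) outside one.
const sw = typeof self !== "undefined" ? self : { addEventListener() {} };

sw.addEventListener("install", (event) => {
  // Pre-cache the app shell so the page can load with no connection at all.
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
});

sw.addEventListener("fetch", (event) => {
  // Cache-first strategy: answer from the cache, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});

// A page opts in with a single line:
// navigator.serviceWorker.register("/sw.js");
```

What the deeper Windows integration would add on top of this baseline is access to the file system and the other core system functions mentioned above, which browsers alone have not exposed.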

For consumers, this means the need to worry about OS-specific mobile apps or desktop applications could go away. Developers would have the freedom to write applications that have all the capabilities of a stand-alone app, yet can be run through a browser and, most importantly, can run across virtually any device. Software choices should go up dramatically, and the ability to have multiple applications and services work together—even across platforms and devices—should be significantly easier as well.

For enterprise software developers, this should open the floodgates of cloud-based applications even further. It should also help companies move away from dependencies on legacy applications and early Internet Explorer-based custom enterprise applications. From traditional enterprise software vendors like SAP, Oracle, and IBM through modern cloud-based players like Salesforce, Slack, and Workday, the ability to focus more of their efforts on a single target platform should open up a wealth of innovation and reduce difficult cross-platform testing efforts.

But it’s not just the software world that’s going to be impacted by this decision. Semiconductors and the types of devices that we may start to use could be affected as well. For example, Microsoft is leveraging this shift to Chromium as part of an effort to bring broader software compatibility to Arm-based CPUs, particularly the Windows on Snapdragon offerings from Qualcomm, like the brand-new Snapdragon 8cx. By working on bringing the underlying compatibility of Chromium to Windows-focused Arm64 processors, Microsoft is going to make it significantly easier for software developers to create applications that run on these devices. This would remove the last significant hurdle that has kept these devices from reaching mainstream buyers in the consumer and enterprise worlds, and it could turn them into serious contenders versus traditional x86-based CPUs from Intel and AMD.

On the device side, this move also opens up the possibility for a wider variety of form factors and for more ambient computing types of services. By essentially enabling a single, consistent target platform that could leverage the essential input characteristics of desktop devices (mice and keyboards), mobile devices (touch), and voice-based interfaces, Microsoft is laying the groundwork for a potentially fascinating computing future. Imagine, for example, a foldable multi-screen device that offers something like a traditional Android front screen, then unfolds to a larger Windows (or Android)-based device that can leverage the exact same applications and data, but with subtle UI enhancements optimized for each environment. Or, think about a variety of different connected smart screens that allow you to easily jump from device to device but still leverage the same applications. The possibilities are endless.

Strategically, the move is a fascinating one for Microsoft. On one hand, it suggests a closer tie to Google, much like the built-in support for Android-based phones did in the latest version of Windows 10. However, it’s specifically being done through open source, and Microsoft is likely to leverage its recent purchase of GitHub, the developer resource, to make web standards more open and less specifically tied to Google. At the same time, because Apple doesn’t currently support Chromium and is still focused on keeping its developers (and end users) more tightly tied into its proprietary OS, Microsoft is essentially further isolating Apple from key web software standards. In an olive-branch move to Apple users, however, Microsoft has said that they will bring the Chromium-powered version of Edge to MacOS and likely iOS, essentially giving Apple users access to this new world of software, but via a Microsoft connection.

In the end, a large number of pieces have to come together in order to make this web-based, platform-free version of the software world come to pass, and it wouldn’t be the least bit surprising to see roadblocks arise along the way. Still, Microsoft’s move to support Chromium could prove to be a watershed moment that quietly, but importantly, sets some key future technology trends into motion.

PC Users’ Smartphone-Envy

Millions of people rely on their smartphones every day for their on-the-go computing needs. For many, especially in younger demographics, smartphones are their sole or main computing device. Whether it is email, social media, or gaming, consumers across the world have become highly dependent on their phones – so much so that the whole tech industry has started to address screen addiction. Considering how much time users spend on these devices, we at Creative Strategies wanted to understand how smartphones fit into people’s workflows, and to do so we conducted an online study of 1,000 US consumers at the end of November.

The first interesting data point we found is that 34% of our panel has both a work PC/Mac and a personal one, while only 15% rely solely on a work PC for both work and personal computing needs. Forty-three percent of our panel only has a personal PC/Mac. This landscape is fascinating, as it points to the opportunity that still exists to reach consumers, and not just IT departments, when it comes to PC sales. While engagement on PCs might have dropped when smartphones first hit the market, it is clear that PCs and Macs still have a place in our homes.

That said, the mere fact that 56% of users on our panel do not have a Windows PC with a touchscreen points to an installed base that is ready for an upgrade. This is even more obvious when we see that 61% of the PC users on our panel said their PC has no pen support.

A Reality Check on How People Work

Before getting into what users want from their PCs, I think it is interesting to look at how they are currently using them, as this provides excellent insight into how to market their next upgrade. Among the people who are currently working, 47% said they usually work from their office desk, and another 30% work at their home-office desk, making mobility not a high priority among our panelists.

Work and life balance is still a struggle for most, as we seem to rely on our phones to keep us on top of things without being dragged into work more than necessary. And so, 40% of our working panelists check their email, calendar, and social media every morning before leaving for work. Twenty-seven percent keep an eye on things on their phone in the evening, trying not to open their PC, while 11% are always on their phone in the evening but use their PCs to either binge-watch or game. Only 17 percent of our working panelists never start or finish their working day at home, which makes me feel somewhat better about my own work/life struggle!

When it comes to how people work across devices, there was much less consensus than there was on where work takes place. Our working panelists are quite varied in their habits of working on one machine or multiple ones. Seventeen percent never work on multiple devices, 12% often start working on a work device and finish on a personal one at home, 23% only use their work device, while 25% pick up a phone or a tablet for a quick edit. Twenty-three percent work seamlessly across devices depending on convenience. What is interesting is that this number only grows to 29% among early tech adopters, pointing to the fact that working across devices does not necessarily require tech expertise these days, especially when the device mix includes a phone.

Top Asks from PC users

With the phone being the most-used device for many people, it is no surprise that there are features users would want to see on their PCs too. This is not about being able to do the same things they do on their phones; rather, it is about benefitting from some critical enablers of the experience their phones deliver. It was evident among our panelists that voice calls, messaging, and social media are better dealt with on a phone than on a PC.

So when we asked consumers which features their smartphone has that they would want to see on their PC, the wishlist reflected all the key qualities of a smartphone. First on the list was long battery life (36%), followed by instant-on (29%) and cellular connectivity (25%).

Interestingly, the second-highest feature was face authentication, at 30%. This reflects my earlier comment that the PCs owned by our panelists seem to be on the older side, and of course it also demonstrates the lack of Face ID support on the Mac. Considering that about 40% of our panelists had the newest Apple and Samsung smartphones, which support face authentication, this ask is not a big surprise, and as more phone manufacturers embrace face authentication, the need to support it on the PC/Mac will grow. For PC makers this is already an option, as Windows supports Windows Hello, but it will be interesting to see what Google and Apple do going forward.

Pain Points Are Not Always a Driver

In technology, I often find myself pushing service providers or hardware manufacturers to look at solving real-life problems to drive uptake of services or hardware refresh.

It is interesting how, when it comes to connectivity, consumers do not see it as a pain point, but they still want it. We asked our panelists how easy they find it to get an internet connection for their PC/Mac/Chromebook when they need one: 31% said it is very easy because they only use a computer at home on WiFi, and 17% said the same of their office/campus. Twenty percent said it is not a problem, as they mostly work from their desk, where their computer is connected. Eighteen percent use their smartphone as a hotspot, and only 5%, who are highly mobile users, admit that finding connectivity is a constant challenge.

If this were the issue always-on and always-connected PCs aimed to solve, it seems that a pitch promising the kind of connectivity that only 6% of our panelists experience with their connected PCs would not lead to much uptake. Yet it is clear to me, from the fact that 25% said they would want a cellular connection, that when we talk about connectivity it is not a question of solving a problem but rather of delivering a level of convenience we have grown accustomed to with our phones.

If I am right, what PC makers, Qualcomm, and carriers are able to offer in terms of plan activation and compelling data pricing will be key to the success of the always-on, always-connected PC. Offering that convenience for a free trial period would get users to never want to give it up, setting the bar for what the next computing experience should be like.

The Connected PC

Sometimes it takes real world frustrations before you can really appreciate the advances that technology can bring. Such is the case with mobile broadband-equipped notebook PCs.

Before diving into the details of why I’m saying this, I have to admit upfront that I’ve been a skeptic of cellular-capable notebooks for a very long time. As a long-time observer of, data collector for, and prognosticator on the PC market, I clearly recall several failed attempts at trying to integrate cellular modems into PCs over the last 15 years or so. From the early days of 3G, and even into the first few years of 4G-capable devices, PC makers have been trying to add cellular connectivity into their devices. However, attach rates in most parts of the world (Western Europe being the sole exception) have been extremely low—typically, in the low single digits.

The primary reasons for this limited success have been cost—both for the modem and cellular services—as well as the ease and ubiquity of WiFi and personal hotspot functions integrated into our smartphones. Together, these factors have put the value of cellular connectivity into question. It’s often hard to justify the additional costs for integrated mobile broadband, especially when the essentially “free” alternatives seem acceptable.

Despite all these concerns, however, we’ve seen a great deal of fresh attention being paid to cellular connected PCs of late. Specifically, the launch of the always connected PC (ACPC) effort by Microsoft, Qualcomm, and several major PC OEMs (HP, Asus, and Lenovo) this time last year brought new attention to the category and started to shift the discussion of PC performance towards connectivity, in addition to traditional CPU-driven metrics. Since that first launch with Snapdragon 835-based devices, we’ve already seen second generation Snapdragon 850-based PCs, such as Lenovo’s Yoga C630, start to ship.

We’ve also seen Intel bring its own modems into the PC market in a big way over the last few months, highlighting the increased connectivity options they enable. In the new HP Spectre Folio leather-wrapped PC, for example, Intel created a multi-chip module that integrates its Amber Lake Y-Series CPU, along with an XMM 7560 Gigabit LTE modem. Conceptually, it’s similar to the chiplet-style design that combined an Intel CPU and AMD Radeon GPU into a single multi-chip module that Dell used in its XPS 15 earlier this year, but integrates a discrete modem instead of the discrete GPU.

Together these efforts, as well as expected advancements, highlight the many key technological enhancements in semiconductor design that are being directed towards connectivity in PCs. Plus, with the launch of 5G-capable modems and 5G-enabled PCs on the very near horizon, it’s clear that we’ll be enjoying even more of these chip design-driven benefits in the future.

Even more importantly, changes in the wireless landscape and our interactions with it are bringing a new sense of pertinence and criticality to our wireless connections. While we have been highly dependent on wireless connections in our PCs for some time, the degree of dependence has now grown to the point where most people really do need (and expect) reliable, high-quality signals all the time.

This point hit home recently after I had boarded a plane but needed to finish a few critical emails before we took off. Unfortunately, the availability and quality of WiFi connections while people are getting seated is dicey at best. But by leveraging the integrated cellular modem in my Spectre Folio review unit, I was able to do so with no problem. Similarly, on a long Lyft ride to an airport on another recent trip, I leveraged the modem in the Yoga C630 for similar purposes. Plus, in situations like conferences and other events where WiFi connections are often spotty, having a cellular alternative can be the difference between having a usable connection and not having one at all.

Admittedly, these are first-world problems and not everybody needs to have reliable connectivity in these types of limited situations. In other words, I don’t think the extra cost of integrated cellular modems makes sense for everyone. But, for people who are on the run a lot, the extra convenience can really make a difference. This is another example of the fact that many of the technological advances that we now see in the PC market are generally more incremental and meant to improve certain situations or use cases. Integrated cellular connections are in line with this kind of thinking as they provide an incremental boost in the ability to find a usable internet connection.

In addition to convenience, the increase of WiFi network-based security risks has raised concerns about using public WiFi networks in certain environments. While not perfect, cellular connections are generally understood to be more secure and much less vulnerable to any kind of network snooping than WiFi, providing more peace of mind for critical or sensitive information.

Of course, little of this would matter if network operators didn’t make pricing plans for cellular data usage on PCs attractive. Thankfully, there have been improvements here as well, but there’s still a long way to go to truly make this part of the connected PC experience friction-free. The expected 2019 launch of 5G-equipped notebooks will likely trigger a fresh round of pricing options for connected PC data plans, so it will be interesting to see what happens then.

Ultimately, while some of the primary concerns around the connected PC remain, it’s also becoming clear that many other factors are starting to paint the technology in a new light. Always-on, always-reliable connections are no longer just a “nice to have,” but a “need to have” for many people, and along with the technology advancements, increased security and lower data plan costs are combining to create an environment where connected PCs finally start to make sense.

Podcast: Amazon AWS reInvent, HP Inc. and Dell Earnings, Apple Music and Amazon

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing multiple announcements from Amazon’s AWS re:Invent conference, including the launch of several new custom chips, discussing the impact of HP’s and Dell’s earnings and what it means for the PC market, and chatting about the new agreement that will let Apple Music work on Amazon Echo devices.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Two Tech Trends That Failed to Save My Black Friday

I make no secret of the fact that I love to shop. Or rather, I like buying things more than the actual process of shopping. Before I get out the door, I know which stores I will go to and have a clear idea of what I need or want to buy. While my outing might still take a couple of hours, it is a targeted and organized operation, despite what my husband and kid might say.

As I take pleasure in buying, not shopping, I do a fair amount of it online. Amazon is my friend, but so is a long list of websites that carry my favorite brands, accept Apple Pay or PayPal, and offer free shipping.

When big sales days come around, like Black Friday, I prioritize online shopping, but I will go to some physical stores, mostly for clothes and accessories. This year I found going to the local mall an excruciating process, more so than it usually is, mostly because I saw all the different ways tech could have made it better but didn’t.

AI and Big Data

Let’s start with how stupid the shopping process still is. Both online and in-store, there is little or no intelligence used by retailers to make your experience less painful and more rewarding for you and for them.

This lack of intelligence starts at home, where you are inundated by ads leading up to the big day that more often than not poorly reflect your buying habits. This is particularly ironic this year, when both tech and politics have spent several months discussing privacy and how much information internet giants should have access to. Well, right now they, as well as most of the retailers I shop from, have access to a lot of information, but the targeted advertising I receive is still pretty dumb. The digital breadcrumbs I leave as I go from site to site follow me around with very generic suggestions, and one has to wonder why the sites I trust and shop from most often do not have a proper profile of me.

Let’s take Amazon as an example of an online retailer, as I shop with them consistently and purchase a vast range of items for myself, my family, pets, and home. Amazon has the list of all my orders as well as everything I have browsed, including how many times I looked at an item without clicking “buy.” Why don’t I get an email with suggestions from the deals of the day that match my buying patterns? For instance, why don’t I get prompted about an item I looked at but did not buy? Chances are I held off because of price, so an offer might get me to finally purchase. Or again, why don’t I get offers on product upgrades? Say I bought a Ring doorbell two years ago and the latest model is now on sale. Why not send me offers for complementary products, like what I might want to add to my smart home after buying several Echo products, a doorbell, and some bulbs?

Much of the same could be said for the brands I shop from on a regular basis, and even more so for those where I am part of a rewards program. If I have a reward card in my digital wallet that pops up to tell me I am close to a specific store, why doesn’t that retailer push it a step further and send me relevant offers on what is available in that store? Why aren’t traditional apparel retailers offering an in-store version of Stitch Fix, where, based on your previous purchases and your body type, they put together a number of outfits that will be ready for you in a store changing room at a specific date and time? You would walk in and go straight to the changing room your phone pointed you to as you entered the store, try everything on, tap the RFID tag of what you want to keep before putting it in a bag, and walk out while your credit card is automatically charged. I grant you that such a system might not be viable on a heavy-traffic day, but it is hard to believe it would not work on any other day.

I understand that many of the sales occurring on Black Friday end up being for items you had not planned to buy, but intelligent shopping does not mean that impulse shopping must die. It would actually mean you end up being more exposed to items you are likely to respond positively to, resulting in more revenue for the retailer but also much higher satisfaction on your part. At the end of the day, there isn’t much that is more satisfying than a successful shopping spree.

Mobility

Black Friday is such a big shopping day that stores have been opening earlier and earlier, with some now starting their sales on Thanksgiving Day. I have gotten up early in the past, mostly to avoid the crowds, but this year was not one of those times, and I had to make three attempts to reach the mall. That’s right: only on Sunday was I able to get to the parking structure of the Westfield Mall in Santa Clara and park my car! The first two times it was impossible to even get to the parking structure due to the high volume of cars.

So this begs the question: where were Lyft and Uber? In the land that invented ride-sharing and scooters, it seems to me that talk of the death of car ownership was immensely premature. I understand, of course, that scooters might not be the safest choice when you are holding shopping bags, but why are people not relying on Uber and Lyft to avoid the pain of parking? I would guess that a lot has to do with how many of these malls treat rideshare services as second-class transportation providers and relegate them to drop-off and pick-up locations that are less than ideal for both passengers and drivers. In my case, a Lyft driver would either get stuck in the same traffic I was trapped in for over an hour or have to drop me off miles away from the entrance.

Why are malls not keeping pace with what their customers want by offering preferential lanes and temporary parking spots for rideshare companies? Airports have adapted to this, and while some still make you walk miles to reach a ride-share pickup location, things are changing fairly quickly. Malls should learn from them, especially in the US, where parking is free. I can see other countries, like the UK, where most shopping centers charge a fair bit to park, resisting such a change as it would result in a loss of revenue.

What ruined my Black Friday could have been solved by technology today, not in some distant future. As I have often said, though, technology might be ready, but business models and humans are not. Data and AI have the power to make my shopping much more tailored to my needs and ultimately more effective. This, coupled with a pain-free rideshare trip to and from my favorite stores, could have delivered a “shopping like a star” experience. But if the big parking structure being built next to the mall is a good indication of how quickly things will change, I am sad to say it will be a while before retail catches up with what technology can already enable.

Robots Ready to Move Mainstream

Are the robots coming, or are they already here? Fresh off the impressive, successful Mars landing of NASA’s InSight robotic lander, it seems appropriate to suggest that robots have already begun to make their presence felt across many aspects of our lives. Not only in space exploration and science, but, as we enter the holiday shopping season, in industry and commerce as well.

From behind the scenes at factories building many of the products in demand this holiday season, to the warehouses that store and ship them out, robots have been making a significant impact for quite some time. Building on that success, both Nvidia and Amazon recently made announcements about robotics-related offerings intended to further advancements in industrial robots.

Just outside of Shanghai last week, at the company’s GTC China event, Nvidia announced that Chinese e-commerce giants JD.com and Meituan have both chosen to use the company’s Jetson AGX Xavier robotics platform for the development of next-generation autonomous delivery robots. Given the expected growth in online shopping in China, both e-commerce companies are looking to develop a line of small autonomous machines that can be used to deliver goods directly to consumers, and they intend to use Xavier and its associated JetPack SDK to do so.

At the company’s AWS re:Invent event in Las Vegas this week, Amazon launched a cloud-based robotics test and development platform called AWS RoboMaker that it’s making available through its Amazon Web Services cloud computing offering. Designed for everyone from robotics students who compete in FIRST competitions to robotics professionals working at large corporations, RoboMaker is an open-source tool that leverages and extends the popular Robot Operating System (ROS).

Like some of Nvidia’s software offerings, RoboMaker is designed to ease the process of programming robots to perform sophisticated actions that leverage computer vision, speech recognition, and other AI-driven technologies. In the case of RoboMaker, those services are provided via a connection to Amazon’s cloud computing services. RoboMaker also offers the ability to manage large fleets of robots working together in industrial environments or places like large warehouses (hmm…wonder why?!).

The signs of growing robotic influence have been evident for a while in the consumer market as well. The success of Roomba robotic vacuums, for example, is widely heralded as the first step in a home robotics revolution. Plus, with the improvements that have occurred in critical technologies such as voice recognition, computer vision, AI, and sensors, we’re clearly on the cusp of what are likely to be some major consumer-focused robotics introductions in 2019. Indeed, Amazon is heavily rumored to be working on some type of home robot project—likely leveraging their Alexa work—that’s expected to be introduced sometime next year.

Robotics is also a key part of the recent renaissance in STEM education programs, as it allows kids of many ages to see the fun, tangible efforts of their science, math, and engineering-related skills brought to life. From the high-school level FIRST robotics competitions, down to early grade school level programs, future robotics engineers are being trained via these types of activities every day in schools around the world.

The influence of these robotics programs and the related maker movement developments have reached into the mainstream as well. I was pleasantly surprised to see a Raspberry Pi development board and other robotics-related educational toys in stock and on sale at, of all places, my local Target over the Black Friday shopping weekend.

The impact of robots certainly isn’t new in either the consumer or business world. However, except for a few instances, real-world interactions with them have still been limited for most people. Clearly, that’s about to change, and people (and companies) are going to have to be ready to adapt. Like the AI technologies that underlie a lot of the most recent robotics developments, there are some great opportunities, but also some serious concerns, particularly around job replacement, that more advanced robotics will bring with them. The challenge moving forward will be determining how to best use robots and robotic technology in ways that can improve the human experience.

Podcast: Dell Analyst Summit, Citrix-Sapho, Nvidia Earnings, Dolby and Microsoft Headphones

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Dell’s Analyst Summit, the Citrix analyst event and their purchase of Sapho, Nvidia’s recent earnings, and the release of new noise-cancelling headphones from both Dolby and Microsoft.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Amazon’s HQ2 Search Has Catalyzed An Important Thought Process

In the end, Amazon’s announcement of Queens and Crystal City as the two new HQ locations was anti-climactic. In part because of the decision to split HQ2 after a year of breathless media coverage, and in part because many believe that the D.C. area choice was already a foregone conclusion given Jeff Bezos’ already substantial ties to the region. Cynics believe that the choice was typically Amazonian – where the company could get the best deal, rather than it necessarily being the best location for the company or, heaven forbid, embracing something slightly more altruistic by helping an up-and-coming tech city like Pittsburgh or Atlanta move to the top tier.

More importantly, this highly public process prompted a year of thinking about how cities must compete for business and talent in the 21st century economy. The Bay Area and Seattle are already overheated, and the infrastructure (affordable housing, transport) to support much more is inadequate. In my hometown of Boston, which was one of the finalists, there was a collective feeling of relief at not having been selected, given already sky-high housing prices, clogged roads, and an overburdened public transport system. And it is both sad and a poor reflection on our current leadership that it takes the prospect of an Olympics or a major new corporate headquarters to catalyze the type of strategic thinking and investment needed for any city that wants to be competitive in the 21st century economy.

In my view, there are five key elements necessary to be in the game:

  • Talent. Both extant and potential via a good educational system and strong universities.
  • Diversity of economy. You don’t want to be too dependent on any one industry or economic sector. Pittsburgh is a great example. Whereas the collapse of the steel industry nearly killed Pittsburgh 1.0, Pittsburgh 2.0 has a much greater diversity of vibrant industries, fueled by a unique level of cooperation among its private, public, and educational sectors.
  • Infrastructure. An adequate road system and a viable public transportation system. It’s becoming clear that 21st century workers don’t want to spend their lives sitting in cars. For certain types of employers/employees, proximity of a decent airport is also a factor.
  • Affordable Housing & Livability. If you’re earning $100,000 and can’t afford a pleasant one- or two-bedroom apartment/condo in the city or a modest home in a close-in suburb, it’s a problem. There’s also the slightly more amorphous concept of ‘livability’, such as a city’s walkability and the presence/proximity of culture and other amenities. Something I’ve always thought is important is ‘what’s a tank away?’, in other words, are there nice places you can easily get to for a day trip or a weekend (beach, mountains, etc.).
  • Progressive Local Leadership. Given the dysfunction on the national level and lack of strategic, long-term investment in education and infrastructure, cities and states with strong local leadership are breaking through. Examples: Nashville, Tulsa, and Los Angeles.

Now, let’s take a look at a few of the cities that were not only finalists in the Amazon hunt, but would be viable contenders at least for the ‘next Amazon’. How do they stack up on the above criteria?

Boston. Educational institutions, diversity of economy, livability, and talent are its strengths. But the city’s housing prices have become Bay-area-esque, and its infrastructure is overburdened and crumbling, with no long-term plan in place. It’s like a city that’s over-touristed and is saying, ‘no more’.

Dallas. Yes, its economy is on fire, but my sense is that this is a place people move to once their career is established rather than a preferred location for younger talent. And while it’s affordable compared to a lot of other cities, Dallas lacks the top-tier educational institutions that are feeders to tech companies, and it remains too auto-centric, despite some recent investments in public transport.

Austin. This city has a lot of the right ingredients in place and has attracted a lot of tech companies already. A much younger, more vibrant feel than Dallas, in part because of the giant University of Texas at Austin. But growth has outpaced infrastructure investment, with sprawl and traffic impacting the quality of life factors that made this city attractive after Dell helped put it on the map.

Atlanta. Has many of the same attributes and challenges as Dallas, but is a notch above in terms of top tier educational institutions. I think traffic/sprawl/infrastructure challenges are what kept it out of the running for Amazon.

Pittsburgh. Here’s a city that has done and is doing a lot of things right to become a 2.0 version of itself. A quite livable place. Not yet a major league city on a global scale, and needs to substantially invest in its transport infrastructure if current growth trajectory continues.

Nashville. Has a lot of the same ingredients as Pittsburgh, and has used its assets to become a major healthcare/tech center. A progressive mayor has courted companies and made the right investments and strategic decisions to make the city much more livable (new park & bike trails, better roads/transport, tons of new housing).

Los Angeles. The highly progressive mayor Eric Garcetti is making huge investments in infrastructure and affordable housing and doing real things to address the homeless issue, tackle inequality, and diversify the economy. This fascinating, diverse place has the potential to be a revitalized global city for the 21st Century…or it could get crushed under the weight of its size and years of under-investment.

I should also mention three Canadian cities that are already seriously on the map:

Toronto. Now North America’s third largest city, Toronto (and the tech epicenter of nearby Waterloo) was a contender for Amazon HQ. And not just because of the idea of ‘Trump Snub’ that the media loved writing about. This city is culturally and economically diverse, has strong educational institutions, and is very livable. It does suffer from U.S.-like problems of traffic and sprawl and inadequate rapid transit outside the city core. And housing prices are among the highest in North America. But you will be hearing more about Toronto in the coming years.

Vancouver. Incredible quality of life, if you can get past the Seattle-esque six months of gloominess. This will be a major 21st century city, given its setting, strong educational institutions, and diversity. The huge run-up in real estate prices (mainly due to foreign investment) and the lack of good rapid transit are challenges…that are actually being addressed.

Montreal. This city has all the right ingredients: already a tech and creative hub, strong educational institutions and tons of talent, still relatively affordable, and a high quality of life. Montreal is also making a significant investment in improving its roads and expanding its transport system. Its brutal winters are a factor for some, and still restrictive language laws keep some companies (and people) away.

And finally, here’s a few more from among the major North America cities:

Getting to the Next Stage: Minneapolis-St. Paul, Portland (OR), Phoenix, Detroit, Philadelphia.

Not Progressing/Worry Button: Chicago, Miami, Orlando, Charlotte, Baltimore…and San Francisco.

Cities shouldn’t be overdoing the post-mortem about why they didn’t get Amazon HQ2. Instead, they should be thinking about what’s needed for them to land the ‘next HQ’.

Dolby Brings a New Dimension to Home Entertainment

Consumers are very familiar with the Dolby brand. Whether you often visit a Dolby Cinema or you have a TV or computer that supports Dolby Atmos and Dolby Vision, you know Dolby delivers one of the best entertainment experiences, one that allows you to lose yourself in the content you are consuming.

At a time when delivering an experience has more and more to do with the combination of hardware, software, and AI, Dolby is bringing to market its first consumer device: a set of wireless headphones called Dolby Dimension.

Making Hardware Does Not Make You a Hardware Maker

It is always easy, when a brand brings to market a product in a category where it has not been present before, to think of it as “entering the market.” Of course, technically this is what they are doing. But there are different reasons why a brand decides to get into a new space. Potential revenue is usually at the core of such a move, but even then, how that revenue is generated differs. Sometimes revenue comes directly from the new product. Other times, the upside comes from how the product boosts brand perception in areas where the name was already present.

When Dolby spoke to me about Dolby Dimension, I thought about how well it fits their history and DNA as well as delivering on a market need. To understand why Dolby is taking this step one should take a quick look at how home entertainment is changing.

A recent Hollywood Reporter study of 2,044 consumers makes it clear that in the US, binge-watching is becoming the norm, and not just for millennials. While 76% of TV watchers aged 18 to 29 said they preferred bingeing, older age brackets are not far behind, with 65% of viewers aged 30 to 44 and 50% of those aged 45 to 54 preferring to binge. And it is not just about how many people binge-watch; it is also how often they do so. Across the national sample of the October study, 15% say they binge-watch on a daily basis. Another 28% say they binge several times per week.

Many will argue that the wireless headphones market is already super competitive and that Bose fully controls the high end of the market, so Dolby should have thought twice before entering this space. But see, this is where the “entering this space” debate starts. From how I look at it, Dolby was looking to expand the ways its technology can be experienced. This took the form of a set of headphones that bring value to a specific set of consumers: people who appreciate high-quality sound, spend hours watching content on whatever screen is most convenient in their home, and see the $599 price tag as an investment in a superior experience that allows them to binge smarter.

It is when you look at the technology packed inside Dolby Dimension and the specific use cases that Dolby has in mind that you understand why this is not a simple branding exercise. The initial limited availability in the US market and distribution focused on dolby.com confirm to me that Dolby is not interested in a broader consumer hardware play, which I am sure will let hardware partners exhale a sigh of relief.

Not Just Another Set of Wireless Headphones

Most wireless headphones today are designed for users on the go. They help you stay immersed in your content or your work by isolating you from the world around you thanks to noise canceling.

There are some models in the market, the latest being the Surface Headphones, that allow you to adjust the noise canceling to let some of the world in if you need to. This is, however, done manually.

Dolby Dimension is designed with home use in mind, which means a few things are different. First, the new Dolby LifeMix technology allows you to dynamically adjust how much of the outside world you let in. Settings, activated through touch controls, enable you to find what Dolby calls the “perfect blend” between your entertainment and your world, or to shut out the outside world entirely through Active Noise Cancelling. If you, like me, binge-watch in bed at night, you might appreciate being able to be fully immersed in your content when your other half falls asleep before you and snoring gets in the way. Other times, you might want to be able to hear your daughter giggling away next door because she decided to ignore your multiple lights-off requests!

Over the days I had to play with Dolby Dimension, what most impressed me is how it really gives you the feeling of sitting in a theatre. This is especially striking when you are watching content on a small screen like a phone or a tablet. The sound, which of course Dolby will tell you is half the experience, really brings that little screen to life, letting you enjoy content at its best. I felt so immersed in what I was watching that I am pretty sure I got to experience the kind of “mom’s voice canceling” my kid has naturally built in when she is watching any of the Avengers movies or gaming!

There are a few more details that highlight what Dolby had in mind with these headphones. Dolby Dimension can be paired with up to eight devices, and you can quickly toggle among your favorite three with dedicated hardware keys on the right ear cup. Hitting the key associated with a device takes you straight to your entertainment app of choice, like Netflix or Hulu, not just to the device.

Battery life reflects a best-sound approach, delivering up to 10 hours with LifeMix and Virtualization turned on and up to 15 hours in low power mode. So whether you, like 28% of the study sample, binge-watch two to three episodes per session, or, like another 21%, watch four episodes at once, you will be left with plenty of power. While we might be tempted to think about a long flight or a day at the office, this is not what Dolby Dimension was designed for and, to be honest, if those are your primary use cases, Dolby Dimension is not really for you.

Headphones are Key to the Experience

It is fascinating how over the past year, or so, headphones have become a talking point in tech. I think the last time that was the case was when Bluetooth was introduced and we got excited about being able to have a conversation on the phone without holding the phone.

When we discuss the removal of the audio jack from our devices or which digital assistant is supported (an assistant you can summon with Dolby Dimension), we are pointing to the fact that headphones have become an essential part of our experience. Considering how much time we spend in front of one screen or another, both at home and on the go, being able to enjoy both visual and audio content is growing in importance. As intelligence gets embedded in more devices and smaller and smaller devices benefit from higher processing power, headphones can become a device in their own right rather than being viewed merely as an accessory.

While I don’t believe Dolby is interested in becoming a consumer hardware company, I am convinced it will continue to innovate and watch how consumer habits are changing when it comes to consuming content. As we move from physical screens to augmented reality experiences and eventually virtual ones, Dolby might continue to take us on a sensory journey through technology and, if needed, hardware.

Chiplets to Drive the Future of Semis

A classic way for engineers to solve a particularly vexing technical problem is to move things in a completely different direction—typically by “thinking outside the box.” Such is the case with challenges facing the semiconductor industry. With the laws of physics quickly closing in on them, the traditional brute force means of maintaining Moore’s Law, by shrinking the size of transistors, is quickly coming to an end. Whether things stall at the current 7nm (nanometer) size, drop down to 5nm, or at best, reach 4nm, the reality of a nearly insurmountable wall is fast approaching today’s leading vendors.

As a result, semiconductor companies are having to develop different ways to keep the essential performance progress they need moving in a positive direction. One of the most compelling ideas, chiplets, isn’t a terribly new one, but it’s being deployed in interesting new ways. Chiplets are key IP blocks taken from a more complete chip design that are broken out on their own and then connected together with clever new packaging and interconnect technologies. Basically, it’s a new version of an SoC (system on chip), which combined various pieces of independent silicon onto a multi-chip module (MCM) to provide a complete solution.

So, for example, a modern CPU typically includes the main compute engine, a memory controller for connecting to main system memory, an I/O hub for talking to other peripherals, and several other different elements. In the world of chiplets, some of these elements can be broken back out into separate parts (essentially reversing the integration trend that has fueled semiconductor advances for such a long time), optimized for their own best performance (and for their own best manufacturing node size), and then connected back together in Lego block-type fashion.
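To make that Lego-block idea concrete, here is a toy model in Python; the component names, node sizes, and generic "fabric" interconnect label are illustrative assumptions, not any vendor's actual design:

```python
# Toy model of a chiplet-based package. Names and node sizes are
# illustrative only; real designs use proprietary interconnects.
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    node_nm: int  # each block uses the process node that suits it best

@dataclass
class Package:
    interconnect: str
    chiplets: list

    def describe(self):
        parts = ", ".join(f"{c.name}@{c.node_nm}nm" for c in self.chiplets)
        return f"{parts} linked over {self.interconnect}"

# Compute cores shrink to a leading-edge 7nm node; analog-heavy
# memory and I/O blocks stay on a cheaper, mature 14nm node where
# smaller transistors buy little.
cpu = Package(
    interconnect="fabric",
    chiplets=[
        Chiplet("compute", 7),
        Chiplet("memory-controller", 14),
        Chiplet("io-hub", 14),
    ],
)
print(cpu.describe())
# → compute@7nm, memory-controller@14nm, io-hub@14nm linked over fabric
```

The point of the sketch is simply that each block carries its own process node: the compute chiplet can chase the leading edge while the analog-heavy blocks stay where they are most economical, and the package wires them back together.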

While that may seem a bit counter-intuitive compared to typical semiconductor industry trends, chiplet designs help address several issues that have arisen as a result of traditional advances. First, while integration of multiple components into a single chip arguably makes things simpler, the truth is that today’s chips have become both enormously complex and quite large as a result. Ensuring high-quality, defect-free manufacturing of these large, complex chips—especially while you’re trying to reduce transistor size at the same time—has proven to be an overwhelming challenge. That’s one of the key reasons why we’ve seen delays or even cancellations of moves to current 10nm and 7nm production from many major chip foundries.

Second, it turns out not every type of chip element actually benefits from smaller sizes. The basic argument for shrinking transistors is to reduce costs, reduce power consumption, and improve performance. With elements like the analog circuitry in I/O components, however, it turns out there’s a point of diminishing returns where smaller transistors are actually more expensive and don’t get the performance benefits you might expect from smaller production geometries. As a result, it just doesn’t make sense to try and move current monolithic chip designs to these smaller sizes.

Finally, some of the more interesting advancements in the semiconductor world are now occurring in interconnect and packaging technologies. From the 3D stacking of components being used to increase the capacity of flash memory chips, to the high-speed interfaces being developed to enable both on-chip and chip-to-chip communications, the need to keep all the critical components of a chip design at the same process level is simply going away. Instead, companies are focusing on creating clever new ways to interconnect IP blocks/components in order to achieve the performance enhancements they used to only be able to get through traditional Moore’s Law transistor shrinks.

AMD, for example, has made its Infinity Fabric interconnect technology a critical part of its Zen CPU designs, and at last week’s 7nm event, the company highlighted how they’ve extended it to their new data center-focused CPUs and new GPUs now as well. The next generation Epyc server CPU, codenamed “Rome,” scheduled for release in 2019, leverages up to 8 separate Zen2-based CPU chiplets interconnected over their latest generation Infinity Fabric to provide 64 cores in a single SoC. The result, they claim, is performance in a single socket server that can beat Intel’s current best two-socket server CPU configuration.

In addition, AMD highlighted how its new 7nm data center-focused Radeon Instinct GPU designs can now also be connected over Infinity Fabric both for GPU-to-GPU connections as well as for faster CPU-to-GPU connections (similar to Nvidia’s existing NVLink protocol), which could prove to be very important for advanced workloads like AI training, supercomputing, and more.

Interestingly, AMD and Intel worked together on a combined CPU/GPU part earlier this year that leveraged a slightly different interconnect technology but allowed them to put an Intel CPU together with a discrete AMD Radeon GPU (for high-powered PCs like the Dell XPS15 and HP 15” Spectre X360) onto a single chip.

Semiconductor IP creator Arm has been enabling an architecture for chiplet-like mobile SoC designs with its CCI (Cache Coherent Interconnect) technology for several years now. In fact, companies like Apple and Qualcomm use that type of technology for their A-Series and Snapdragon series chips, respectively.

Intel, for its part, is also planning to leverage chiplet technology for future designs. Though specific details are still to come, the company has discussed not just CPU-to-CPU connections, but also being able to integrate high-speed links with other chip IP blocks, such as Nervana AI accelerators, FPGAs and more.

In fact, the whole future of semiconductor design could be revolutionized by standardized, high-speed interconnections among various different chip components (each of which may be produced with different transistor sizes). Imagine, for example, the possibility of more specialized accelerators being developed by small innovative semiconductor companies for a variety of different applications and then integrated into final system designs that incorporate the main CPUs or GPUs from larger players, like Intel, AMD, or Nvidia.

Unfortunately, right now, a single industry standard for chiplet interconnect doesn’t exist—in the near term we may see individual companies choose to license their specific implementations to specific partners—but there’s likely to be pressure to create that standard in the future. There are several tech standards for chip-related interconnect, including CCIX (Cache Coherent Interconnect for Accelerators), which builds on the PCIe 4.0 standard, and the system-level Gen-Z standard, but nothing that all the critical players in the semiconductor ecosystem have completely embraced. In addition, standards need to be developed as to how different chiplets can be pieced together and manufactured in a consistent way.

Exactly how the advancements in chiplets and associated technologies relate to the ability to maintain traditional Moore’s law metrics isn’t entirely clear right now, but what is clear is that the semiconductor industry isn’t letting potential roadblocks stop it from making important new advances that will keep the tech industry evolving for some time to come.

Podcast: Samsung Developer Conference, AMD 7nm, Google Policies

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Samsung’s Developer Conference and the announcements around their Bixby assistant platform and Infinity Flex foldable smartphone display, AMD’s unveiling of their 7nm Epyc CPU and Instinct GPU for the cloud and datacenter market, and Google’s recent internal policy changes on harassment and other issues.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Automotive Tech Now Focused on Safety

After years of hype and inflated expectations, it’s clear that the mania around fully autonomous cars has cooled. In a refreshing, and much needed change, we’re starting to see companies like Nvidia, Intel/Mobileye, and Arm now talk much more about the opportunities to enable enhanced safety for occupants in cars supporting advanced technologies.

It’s not that the tech industry is giving up on autonomy—as recent announcements about new rounds of trials from Lyft, Uber, and others, as well as advanced new chip designs clearly illustrate—but the timeframes for commercial availability of these advancements are starting to get pushed out to more realistic mid-2020 or so dates. Even more importantly, the messaging coming from critical component players is shifting away from roads packed with Level 5 fully autonomous cars within a few years, to ways that consumers can feel more comfortable with and safer in semi-autonomous cars.

Over the last few weeks, Nvidia, Intel, and Arm have all discussed research reports and technology advancements in the automotive market that are focused primarily on safety, with the technology playing a supporting role. Nvidia, for example, released a comprehensive study called “The Self-Driving Safety Report” that provides a view into how the company incorporates safety-related technology and thinking into all aspects of its automotive product developments. The report covers everything from AI-based design, to data collection and analysis, to simulation and testing tools, all within a context of safety-focused concerns.

Intel, for their part, released a comprehensive study on what they termed the Passenger Economy this past summer, but recently touted some findings from the report that focus on the relatively slow consumer acceptance of self-driving cars due to concerns around safety. Essentially, while 21% of US consumers say they’re ready for an autonomous car now, 63% of consumers believe it will be 50 years before autonomous cars become the norm. To address some of these concerns, Intel is touting its Responsibility-Sensitive Safety (RSS) model, which it describes as a mathematical model for autonomous vehicle safety. The idea for RSS is to develop a set of industry standards for safety that can then be used to reassure consumers in a transparent way about how autonomous cars will function. Recently, Intel announced that Baidu had chosen to adopt the RSS model for its autonomous driving efforts in China.
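The "mathematical model" framing can be made concrete: RSS defines, among other rules, a minimum safe longitudinal following distance from a reaction time and worst-case accelerations. A rough sketch of that rule follows; the parameter values are purely illustrative, not Intel's calibrated ones:

```python
def rss_safe_distance(v_rear, v_front, rho=1.0,
                      a_accel_max=3.5, b_brake_min=4.0, b_brake_max=8.0):
    """Minimum safe gap behind a lead car, per RSS's longitudinal rule.

    Worst case assumed: during reaction time rho the rear car accelerates
    at up to a_accel_max, then brakes no harder than b_brake_min, while
    the front car brakes at up to b_brake_max.
    Speeds in m/s, accelerations in m/s^2, distance in meters.
    """
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + (v_rear + rho * a_accel_max) ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    return max(d, 0.0)  # a negative result means any gap is safe

# At matched highway speeds (~27 m/s), the rule demands a gap of
# roughly 100 m with these illustrative parameters.
gap = rss_safe_distance(27.0, 27.0)
```

If the maintained gap never falls below this bound, a rear-end collision cannot be the following car's fault under the model, which is what makes the rule auditable as a standard.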

Back in late September, Arm announced a new program called Safety Ready that ties together a number of the company’s security and safety technologies into a unified structure that, while not limited to the automotive market, is very well-suited for it. Safety Ready incorporates both chip IP designs and software that are focused on applications where functional safety is critical, and allows the company to meet the key automotive-related functional safety certifications, including ISO 26262 and ASIL-D. At the same time, the company also introduced a new automotive-specific chip design called the Cortex-A76AE that integrates a capability called Split-Lock, which allows a dual-core CPU either to function as two independent cores doing separate tasks or as a single logical core running in lockstep, where one core can take over immediately if the other fails. As in many automotive applications, redundancy is key for safety, and the Split-Lock capability of this new design brings it to digital components as well.
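Conceptually, lockstep redundancy means feeding two cores the same workload and cross-checking results; any divergence flags a hardware fault. A toy software sketch of that idea (illustrative only, not Arm's actual hardware implementation, where the comparison happens cycle by cycle):

```python
def lockstep_run(core_a, core_b, inputs):
    """Run the same workload on two 'cores' and compare every result.

    A mismatch indicates a fault in one core; a real checker would then
    isolate the faulty core and fail over to the healthy one.
    """
    results = []
    for x in inputs:
        a, b = core_a(x), core_b(x)
        if a != b:
            raise RuntimeError(f"lockstep mismatch at input {x!r}: {a} != {b}")
        results.append(a)
    return results

square = lambda x: x * x
healthy = lockstep_run(square, square, [1, 2, 3])   # [1, 4, 9]

# A simulated single-bit fault in the second core:
faulty = lambda x: x * x + (1 if x == 2 else 0)
# lockstep_run(square, faulty, [1, 2, 3]) would raise a mismatch error.
```

The appeal for automotive is that the fault is caught at the moment it occurs, rather than after a corrupted result has propagated into a steering or braking decision.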

While it may seem that all these announcements are a somewhat dramatic shift from how the tech industry had been talking about autonomous cars, in reality they are simply part of a maturing perspective on how this market will develop. In addition, they’re based on some practical realities that many in the autonomous automotive industry have started to recognize. First, as research has continued to show, most consumers are still very leery of autonomous car features and aren’t ready to trust cars that take over too much control.

Second, the level of difficulty and technical challenge in getting even basic autonomy features to work in a completely safe, reliable way is now recognized as being even harder than it was first believed. Even semi-autonomous cars integrate an extraordinarily complex combination of advanced technologies that includes AI and machine learning, advanced sensors and fusing of sensor data, and intelligent mapping, all of which have to work together seamlessly to ensure a safe, high-quality driving experience. There’s no doubt that we will start to get there, but for now, it’s reassuring to see companies focus on the critical safety enhancements that assisted driving features can bring, as we look further out to the world of full autonomy.

Buying New Tech Before the End of the Year

If you are keeping up with the news, you know that there is a trade battle going on between the US and China. Our president has already placed significant tariffs on many products imported from China and is looking at adding another $250 billion in tariffs that would cover just about all products coming from China. While talks continue between China and the US, with trade officials trying to avert this new round of tariffs, many of my sources in Washington tell me they believe it is inevitable that President Trump will enforce these new tariffs after the first of the year.

To date, most, if not all, of the major tech companies have had their lobbying arms trying to get the President to back off these tariff threats and find a diplomatic resolution to this trade problem. However, many in Washington are doubtful that China will give in to the US trade demands and are now starting to work out how new tariffs would impact them in the near term.

Most tech companies are now doing some significant long-term planning to try and find ways to avoid paying these tariffs by looking at moving some of their final test and assembly to other countries like Vietnam, Malaysia, or India. They would then ship these products from there, thus avoiding the tariffs on goods shipped from China. However, since so much of tech is made in China and shipped from there, it would be difficult for the majority of companies to employ this tactic to avoid paying what may be as much as a 25% tariff on goods shipped directly from China.

The economists I have talked to about the impact these tariffs would have on PC and laptop prices say that the worst-case scenario is that it would add a full 25% to the final consumer price of a laptop or PC shipped under these new tariffs. In this case, PC vendors would pass the entire tariff on to the customer.

A best-case scenario is that the PC or laptop companies eat some of their profit margins, taking on part of the tariff burden themselves, and pass only half or a portion of the cost of the tariff on to the final buyer.
In either case, after new tariffs become law, it is very likely that laptops and PCs will have higher prices. That is why, if you are in the market for a PC or laptop, it would be wise to consider buying before these tariffs go into effect.
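The pass-through arithmetic behind those two scenarios is simple enough to sketch; the prices below are purely illustrative:

```python
def consumer_price(base_price, tariff_rate=0.25, pass_through=1.0):
    """Final consumer price when the vendor passes `pass_through`
    (a fraction from 0.0 to 1.0) of the tariff on to the buyer."""
    tariff = base_price * tariff_rate
    return base_price + tariff * pass_through

# Worst case: the full 25% tariff is passed on to the buyer.
worst = consumer_price(1000)                     # 1250.0
# Best case: the vendor absorbs half the tariff in its margins.
best = consumer_price(1000, pass_through=0.5)    # 1125.0
```

Even in the best case, a $1,000 laptop gets noticeably more expensive, which is the argument for buying before the tariffs take effect.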

From an industry standpoint, any new tariffs could not have come at a worse time. For the last five years, demand for PCs has steadily declined, and only this year have we seen a slight uptick in PC and laptop demand. Even more interesting, the growth has not come at the low end of the PC and laptop market, where margins can be as low as 3%. The area of PC and laptop growth has been the $799-$999 range, and we have even seen strong sales for PCs and laptops in the $1100 to $1500 price range too.

While margins are better for products in these price ranges, how the PC vendors will deal with their pricing under these tariffs is not clear. As I stated above, they could eat some of their margins to offset price rises from the tariffs, but some of the tariff costs will be passed on to the customer if they want to remain profitable.

However, the tariff impact on tech companies today is not the biggest problem they will face from trade issues with China in the future.
That will come from a Chinese initiative, the Made in China 2025 policy, which aims to have only products made in China sold to the Chinese public by 2025. China’s current leaders are moving the country to be independent of products and services made anywhere but China.
While 100% of the products and goods China needs can never come from or be made in China, the country is working hard to get as much as possible created and manufactured in China by 2025.

For example, China is the largest market for US soybeans. They have a plan in place to spend billions on soybean farming in various areas of China and, by 2025, plan to be 100% self-sufficient for their soybean needs. China has already put tariffs on US soybeans, and by 2025, they plan to make no purchases of US soybeans at all.

While Trump has tried to get more US companies to manufacture in the US, many of the tech companies are instead expanding their manufacturing for the Chinese market in general and then trying to find ways around the tariffs by pushing final test and assembly out of China. One PC maker told me that even if they wanted to manufacture in the US, the cost of labor and increased real estate and manufacturing costs would add, at a minimum, 25-30% to the final price of their PCs or laptops. So even with paying the tariffs now (which they hope will be a short-term issue), it would not make that much difference to bring that manufacturing back to the US.

As I look at the current crop of mid- to high-end laptops and PCs, it is clear that you can still get a lot of technology at reasonable prices. But once the new tariffs kick in, if your PC maker has not found a way around them, prepare to pay higher prices for that special desktop or laptop you can get at a reasonable price today.

Podcast: Q3 2018 Tech Earnings Analysis and Outlook

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the major tech earnings from this week including Amazon, Google/Alphabet, Intel, Microsoft and others, and analyzing how the overall tech market is evolving.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

New Phones Helping with Old Phone Addiction

As we reach the end of the year, we will soon see coverage summarizing the tech trends of 2018. When it comes to smartphones, no doubt edge-to-edge displays, AI, ML, and cameras will all make the top ten lists, and so will phone addiction.

Over the past 12 months we have seen vendors playing in the smartphone market, at both the OS and hardware level, come up with solutions to help us monitor and manage the time we spend on these devices. More recently, we have seen a couple of vendors pitch smaller form factors with a more limited set of features. These new phones are positioned as a companion to your main smartphone for those times when you want to disconnect a little and be more in the moment.

As a concept, this is not at all different from what we experienced at the start of the smartphone market, when people would still keep a featurephone to use over the weekend. What is different, though, is the reason behind the trend. Back then smartphones were really more of a work thing, so having one with you all the time meant you were on email all the time and therefore on the clock 24/7. Getting that featurephone for the weekend was much more about taking a break from work than taking a break from technology. The promise these new devices are making is to free you from the grip of your smartphone, which consumes too much of your time. But is it as simple as wanting to spend less time on these screens?

Phones Do So Much, How Can We not Love Them?

I am not going to argue that our relationship with our phones is a healthy one, but I will argue that most of us do not really want any help. The point is that these phones do a lot for us today. They have come to replace so many other devices in our lives that it makes sense we spend more time on them: listening to music, taking pictures, watching TV shows… We can also do many of the tasks we used to do only on a PC: search, shop, email, game… So our time staring at the little screen has grown.

For many people, smartphones have become a productivity as well as an entertainment center. I took a quick look at my usage over the past seven days, and apparently I picked up my phone on average 94 times and received on average 240 notifications per day, and my usage when I am not traveling is at least 40% lower than when I am away from a computer. Some of these notifications are from the doorbell or our home security cameras, so not something that necessarily requires interaction on my part, but something that asks for attention nevertheless. When we dig a little deeper, however, it is fascinating to see how much of what we do with the phone today we used to do not just with a different device but with a non-tech tool altogether. Over the past seven days, for instance, I spent 21 minutes using my phone as a wallet and 38 minutes using it as a camera. I also used it for one hour to navigate my way around the world, 39 minutes talking to someone over FaceTime, 21 minutes filing my expenses, 19 minutes booking flights, 15 minutes shopping on Amazon, and one hour and 52 minutes doing email. I don’t really feel sorry about any of this, as I see the phone as just a tool consolidation effort. Had I done all those things with different devices, there would have been no reason to say I was addicted to something.

We Do What We Want Not What We Should

The problem starts, at least for me, when I see that over the past week I spent over 8 hours on social media. Now, part of it is work, part of it is information, but truth be told, a lot of it is boredom. For me, and many others, the smartphone is no different from the TV. We used to watch TV, or just have it on, to fill our time even when nothing interesting was on. The smartphone is like having access to that TV whenever and wherever you want, to tune into the reality show that is social media, or gaming, or anything else that helps you fill a void.

This is where the addiction is, in that filler role that smartphones play. And it is hard to let go without experiencing some level of FOMO. Like any addiction, self-discipline is not always enough. I know too much chocolate is bad for me, and I can try and limit myself to one square, but it is so much easier when I do not buy any at all. This is the premise of these new phones like the Palm. They are the equivalent of you not buying the chocolate. One could set up limits for all the apps and tools available on the phone through one of the new features like Apple’s Screen Time. But that would be the equivalent of limiting yourself to one square knowing you have a whole bar of chocolate in the cupboard.

It might be sufficient for some, as long as you first admit you have a problem, of course. But what happens when you get back to your main smartphone? Are you going to binge-use it to make up for lost time?

Smartwatches Give Me All the Help I Need

This is why learning to control your use, in my view, will have more long-term effects and in my case, smartwatches have really helped. Yes, I know given the numbers I just shared you are scared to think what my usage was like before!

Smartwatches allow me to take a break from my phone by preventing me from being sucked into it for longer than I need to be. Continuing the chocolate metaphor, wearables are the equivalent of someone breaking off a square and giving it to me while hiding the rest of the bar.

All those notifications I receive are not always essential. The important ones like a text from my daughter, a call from my colleague or a flight change will come to my wrist, but an Instagram like, a Facebook post or a non-work email will not. Being able to prioritize what is time sensitive and what is not is a great help in cutting back on the number of times you unlock your phone. Getting the urgent stuff to your wrist also makes sure you do not live in eternal fear of missing something, which leads to picking up the phone more often than you need.

Of course, smartwatches are not a magic wand. Users need to spend some time deciding what they want to prioritize so that the watch does not duplicate the phone. I might be wrong, but using a smartwatch to tame phone usage requires an investment in understanding where the problem is. With the simpler phone, the risk of changing behavior to fit the new device is as strong as relapsing on the smartphone that is still in my pocket. In other words, if I switch my social media time to texting time because my simpler phone does not support apps, I am just changing my addiction not controlling it.

Oracle Makes Case as Cloud Computing Provider

Sometimes it’s good to be late. That’s the argument we’ve heard from quite a few technology companies that are tardy to a particular market. They claim that their “lateness” isn’t really a detriment, but actually a positive attribute to their offering.

That’s certainly the case that Oracle is making when it comes to their cloud computing product—Oracle Cloud, which ties together Oracle Cloud Infrastructure, or OCI, with their various cloud, database, and application platforms. Right now, Oracle is clearly battling it out for number four with IBM and SAP, well behind Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, regardless of which market research data provider you choose to believe. Of course, with the group creating OCI primarily based in Seattle, it certainly doesn’t hurt (nor is it terribly surprising) that 90-95% of them are alumni of the Amazon AWS or Microsoft Azure cloud computing groups, according to people in Oracle’s cloud product development group. As a result, several said that they can learn from the missteps of these companies and develop a more efficient solution.

Even with this fourth-place position, however, Oracle clearly sees an opportunity to differentiate from the others, based on its core database IP and a focus on high-performance applications. As was pointed out yesterday on the first day of Oracle’s OpenWorld event in San Francisco, there are quite a few companies that are interested in migrating traditional Oracle-based database-focused products onto the cloud. So far, the percentage of traditional Oracle-focused companies that have actually done this seems to be pretty small—perhaps high single digits as a percentage—but it’s clear that much of the hesitation is due to the more conservative approach to technology adoption that many Oracle customers have.

In other words, they may be slow, but that’s really just a reflection of where the real market is. Despite what you may read in the common tech press, the truth is, many companies are much slower at adopting these new technologies than the hype would have you believe, especially in the case of cloud or even AI adoption.

During CTO Larry Ellison’s keynote speech yesterday, he focused on both the performance and cost advantages of Oracle’s cloud-based solution, highlighting benchmarks against AWS that were almost too good to be true, with figures like 80x improvements in performance and 100% reduction in costs. Basically, he promised that anyone who could move data-intensive workloads with high performance requirements—admittedly only a portion of typical cloud workloads—to Oracle’s cloud would see a high-quality return on investment. In addition, he highlighted the company’s technology infrastructure offering, which he argued had both more flexibility and more capability than many of its competitors.

The key message in Ellison’s keynote speech, however, was on security-related issues in the cloud. In fact, he said that Oracle leveraged its late appearance in the market to build a newer, more agile and more secure means of creating a cloud platform. In what he termed Cloud Generation 2.0, he said the company built a separate ring of cloud control computers that lived outside of (and conceptually encircled) the standard cloud computing nodes. In other words, it is a network outside of the core cloud computing network that avoids the challenges of having to intermix the cloud control plane architecture with that of a company’s more proprietary data structure. By doing so, he claimed, the company could avoid the security risks of other challengers’ platforms, which co-mingle the cloud control software with customers’ own data and applications.

Conceptually, it’s an intriguing new architecture that looks at the interactions between customers’ data and the cloud providers’ software in very different ways. Plus, given the fact that these cloud control computers would not be based on x86 architecture (the company would not say exactly what they would use, but given Arm’s recent efforts in infrastructure, that seems a likely bet), they add a new twist to the security discussion. To be clear, this architecture opens up potential security and management challenges of its own, such as not knowing what’s happening within the enclosed circle of bare metal cloud computing nodes that have no deployment or management software running on them. However, it simultaneously creates a new type of interaction model between customer and data infrastructure provider that has additional potential security benefits.

Ultimately, it boils down to a question of trust: what companies do you trust, not only from a software perspective, but also from a security, data management, and overall services point of view? There isn’t an easy answer to these questions, but it’s clear that, with its new security-focused cloud architecture, Oracle is providing a very different perspective on how to think about cloud-based computing models.

Podcast: Huawei Product Launch, Tiny Smart Phones, Samsung Galaxy Book, Arm TechCon

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the Huawei product launch event in London, the release of several tiny smart phones, the debut of Samsung’s Qualcomm-powered Galaxy Book Always Connected PC, and the Arm TechCon conference.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Smartphone Innovation’s Geographical Shift

This week I have been in London for the launch of the Huawei Mate 20, Mate 20 Pro, and Mate 20 X (you can find a great review here), and as I played with the devices, listened to Huawei dive into their silicon and artificial intelligence, and watched an AR panda deliver kung fu kicks on stage, I realized that these were yet more phones US consumers would not be able to buy.

Some American consumers keeping up with tech news over the past few months might have concluded they do not want to buy phones from Chinese brands, believing that Chinese vendors would be installing spyware on their devices. While I am happy to be proven wrong, I do think much of the recent debate in the US over Chinese manufacturers has much more to do with politics than technology. I am sure many of you have much more exciting lives than mine, but even so, are they really interesting enough for a foreign government to spy on?

For the majority of buyers who follow tech and look at the success of Huawei internationally, this concern is not top of mind, and they would rather get their phones through a carrier than jump through hoops to buy one at full price from a third party, not even optimized for some US cellular bands. Google’s presence on stage with Huawei in London, adding the latest Mate 20 and Mate 20 Pro to its Android Enterprise Recommended program, should help with some of the doubts people have.

More Limited Competition

So what if this political environment results in US consumers having less choice? The US market has always been a very hard market to get into. American carriers have always been quite demanding, and over the years we have seen brands that were strong in other markets, like Nokia and Sony, fail or struggle to grow in the US. Huawei had of course tried to enter the US market before, both under the Honor brand and the Huawei brand, but sales were limited. Right when the Chinese maker was close to signing with a leading carrier last year, the US government started talking about the danger posed by Chinese-branded hardware.

One could argue, of course, that even if Huawei had a fair opportunity to get into the US market, they would find it as difficult as other brands have before them and they would fail. My argument here, though, is not about how successful Huawei would be, but rather how consumers would not have to miss out. Buyers would actually be the ones deciding Huawei’s success.

The Dynamic Nature of the European Mobile Market

In my days in London I was reminded of how much more dynamic the European mobile market is. This is due in part to the fact that it is made up of many individual countries sharing many commonalities, where vendors can enter and build their success story one country at a time. Huawei, like Samsung before it, saw its first success in Italy and expanded from there, finally reaching the UK too, a market that has been very much dominated by Samsung and Apple.

Carrier plans in Europe are more diversified than in the US, with a prepay market that remains stronger than in the US, and this drives different needs in terms of price points, which in turn drives a more diverse supply mix. I also think that the mobile market has always been a bit more vibrant in Europe, even before Asian manufacturers moved in. Back then it was about local players like Siemens in Germany or Sagem in France, to name just two.

The US, on the other hand, has always been a more monolithic market, where the gap between the top two or three brands and the rest of the market players was way too big to fill. Today some new names have entered the US market, either online or through carriers, but their volumes remain quite small. Some of these brands have their headquarters in China, like OnePlus and TCL, and one might wonder how long it might take for the political debate on China to impact them as well.

Getting Back to the Many Faces of the Huawei Mate 20

What I think is interesting about Huawei’s latest smartphones is that the company seems more comfortable talking about their advancements in silicon, particularly as they relate to artificial intelligence. Their capabilities have grown since CES two years ago, when I accused them of AI washing. Their camera technology shows the phone’s ability not just to detect what you are taking a picture of, similar to what Samsung, LG, and others can do, but also to pick which one of the three cameras will take the best shot for you and offer that option. The LG V40, which sports three rear cameras as well, does not offer this option despite selling under the ThinQ brand, which denotes AI capabilities.

It was a shame that during the launch event CEO Richard Yu felt the need to compare Huawei’s innovation to that of the iPhone so many times. Even when you are comparing yourself to what is perceived to be the best in the market, you are doing yourself a disservice by limiting yourself to being judged by the parameters set by the one you are comparing yourself to.

The smartphone camera remains a big purchase driver for consumers, and talking about AI in relation to the camera works well as a “show and tell.” It is also compelling and, maybe more importantly, less scary for consumers than drawing attention to AI being able to text a reply for you or spot a robocall, as we recently heard from Google on the Pixel 3.

Design, which has always been a strong point for Huawei, continues to serve them well as they try new colors and finishes and even iconic camera designs on the back. Huawei has also started to deliver technology firsts, like an under-screen fingerprint sensor on a worldwide product and reverse charging functionality, both available in the Mate 20 Pro.

Huawei’s weakest link remains software, in my opinion. And this is not because I would prefer that their phones shipped with vanilla Android rather than Huawei’s EMUI, but because with some features they are more focused on delivering features for the sake of it than on actually delivering value. Think, for instance, of the ability they displayed on stage to change any light in a given photo into a heart-shaped light. Calling that AI belittles what AI really does in a product like the Mate 20 Pro.

I also believe that Huawei is well aware of this weaker software play and lacks confidence in taking ownership of the things they do well. Adding features because the market demands them, even when other vendors already offer them, does not mean copying your competitors. Yet, if your delivery mimics those vendors, then you do run the risk of being labelled as someone who copies the competition. Samsung went through the same self-discovery process, and they are closer than they have ever been to defining who they are and how they want to drive success going forward.

Delivery is also important when you want developers to take advantage of your features, both hardware and silicon ones. Showing what looks like a working app, like the 3D calorie calculator, when you are in reality only showing a concept of what is possible, is disingenuous and does not help grow confidence in your brand. It was a real shame that Huawei did not take the opportunity that this week’s launch offered to tell its story and bring the audience on a journey of growth, not in market share, but in capabilities and ambitions.

The complexity of the Chinese market will continue to drive Chinese makers to move fast. For some players that might mean being a fast follower, but I feel there is more and more originality coming from vendors like Huawei, Xiaomi, and OnePlus. Some players might be limited in what they can do internationally due to IP, some might be limited by a lack of understanding of international consumers, but it is a shame to think that protectionism might prevent some innovation from getting to US buyers.

Arm and Intel Partner to Ease IoT Challenges

There’s nothing like a common enemy to bring together companies that otherwise find themselves in competition with one another. In the still untamed world of the Internet of Things (IoT), however, it doesn’t require another company to trigger that kind of reaction, just a set of real-world challenges: complexity, confusion, and overwhelming choice.

After bold proclamations from an enormous range of sources about the nearly limitless opportunity that IoT was supposed to represent, the cold hard reality of modest deployment numbers has created a dark cloud over the tech industry. The promised land of billions or even a trillion connected devices that IoT was supposed to enable seems as distant as ever and some analysts are starting to walk back their overenthusiastic early proclamations.

That’s likely why the two leaders in major chip architectures for IoT (and virtually all!) devices—Arm and Intel—have come together to help break down some of the barriers that have held the industry back. As both companies clearly recognize, the opportunity enabled by IoT is still very real, but it turns out it’s a lot harder to achieve than many first anticipated. As with so many issues in the tech industry, much of the problem has to do with scale. Setting up a few connected sensors to measure important data and then drawing insights from that sounds pretty straightforward, and, in many early pilot tests, results were very encouraging. Nearly everyone involved in the industry, in fact, can point to a few great case studies of where IoT deployments have made a very positive impact.

Taking those principles into large, widespread deployments, however, has proven to be very difficult. From the enormous diversity of IoT software platforms and ecosystems, to an even wider range of device types (and chip architectures powering them), through the massive set of potential security challenges, many companies eager to leverage IoT have slowed or even halted their implementation plans.

One of the biggest challenges, it turns out, is actually one of the first things that has to be done: connecting the devices to the software tools that will collect the data they generate. While that may sound simple, it turns out there are quite a few steps to take and issues to consider when talking about quickly and securely connecting hundreds of millions of devices that will be built by tens of thousands of different companies.

Specifically, you have to consider the provisioning of a device—which is the process of setting it up to properly connect to a network—and ensuring that it’s connected to the right place, communicating securely, and updated with the latest device firmware, a set of tasks typically referred to as onboarding. In addition, you need the flexibility to connect those devices to any variety of different cloud platforms (especially in the multi-cloud era that many companies now find themselves in), and you need to ensure that the device hasn’t been tampered with at any point during its manufacturing, distribution, installation, or operation.

Again, multiply all those concerns by the billions and it’s easy to see that even the smallest delay or the slightest oversight in any step of the process could be very problematic. Acutely aware of these concerns, both Arm and Intel have introduced a variety of technologies to address them over the years. Notably, Intel’s provisioning system, Secure Device Onboard (SDO), leverages the company’s hardware root-of-trust-based EPID (Enhanced Privacy ID) technology to cleverly mask the real identity of a device, while simultaneously assuring a connected network that the device is who it says it is and that it can be connected automatically to a trusted network. The net result is the ability to securely bring IoT devices online without any human interaction, also called zero touch, which is a huge timesaver.

In addition, Intel SDO features a capability called late binding that allows a device’s security credentials and intended network target to be added at any point in the supply chain. This allows device manufacturers, or companies who are piecing together IoT devices from a variety of different suppliers, to cost-effectively mass produce items that can be customized for specific applications or environments at a later date. This gives IoT device makers the ability to save costs without giving up the critical security customizations necessary to avoid huge problems like the Mirai botnet attack that has hit many insecure IoT devices over the last few years.
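To make the zero touch and late binding concepts concrete, here is a minimal sketch in Python. To be clear, this is not Intel’s actual SDO protocol or Arm’s Pelion API; it is a simplified illustration of the idea, with all names (the voucher structure, the example URLs, the HMAC-based attestation) being hypothetical stand-ins: the device ships with only a burned-in secret, each party in the supply chain appends the next owner to an ownership record without ever touching the device, and at first power-on the device proves possession of its secret and is pointed at whichever management endpoint was bound last.

```python
# Simplified sketch of zero-touch onboarding with late binding.
# NOT the real SDO/Pelion protocols; names and URLs are illustrative.
import hmac
import hashlib

def device_attest(device_secret: bytes, challenge: bytes) -> bytes:
    """Device proves possession of its hardware-rooted secret."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def bind_ownership(voucher: dict, new_owner_url: str) -> dict:
    """Late binding: append the next owner/cloud target to the voucher
    at any point in the supply chain, without touching the device."""
    return {**voucher, "owners": voucher["owners"] + [new_owner_url]}

def onboard(device_secret: bytes, voucher: dict, challenge: bytes) -> str:
    """At first power-on, the onboarding service challenges the device;
    if attestation checks out, the device is handed its final owner."""
    expected = device_attest(device_secret, challenge)
    # In reality this response would come over the network from the device;
    # here we simulate the device computing it from its own secret.
    supplied = device_attest(device_secret, challenge)
    if not hmac.compare_digest(expected, supplied):
        raise PermissionError("device failed attestation")
    return voucher["owners"][-1]  # the last-bound owner wins

secret = b"burned-in-at-manufacture"
voucher = {"device_id": "sensor-001", "owners": []}
voucher = bind_ownership(voucher, "https://factory.example/provision")
voucher = bind_ownership(voucher, "https://pelion.example/manage")
endpoint = onboard(secret, voucher, challenge=b"nonce-123")
print(endpoint)
```

The key design point the sketch tries to capture is that the device itself never needs reconfiguring along the way: everything specific to the final deployment lives in the ownership record, which is why manufacturers can mass-produce generic devices and customize them later.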

Arm, for its part, recently unveiled its Pelion IoT Device Management service, which also has a zero touch provisioning feature (for Arm IP-based devices, such as those with chips featuring Cortex-M or Cortex-A processor cores), and provides device management capabilities. Like the Intel solution, Arm’s Pelion service is ultimately based on a hardware root of trust that’s built into the core design of the Arm-licensed processors and microcontrollers so commonly used in IoT devices.

By getting these two solutions to work together, the companies are helping overcome a number of hurdles that neither could clear on its own. First, of course, is the fact that the solutions have now been extended to cover both x86-based Intel-powered IoT devices and Arm core-licensed IoT devices—essentially just about every device imaginable. As a result, both types of devices can now be provisioned with zero human touch, both can take advantage of Intel’s late binding technology, and both can be managed through Arm’s Pelion Device Management service.

The ultimate result is that these steps should make it significantly easier for companies that deploy a wide variety of different IoT devices, built with different components from different manufacturers, to quickly and securely connect them to the right places in a consistent manner. This can not only significantly reduce potential friction points in large-scale deployments, but also makes it much easier to manage, update, and work with the devices as a collective group. Plus, by making the process more cost-effective (and more secure) for device manufacturers, it should lower the costs of large IoT installations, reducing yet another potential barrier to adoption.

The march to billions of IoT devices and the amazing ecosystem that it should enable is still bound to be a long one, but by combining the best capabilities of their respective IoT solutions, Arm and Intel are taking a major step together in clearing the road of potential obstacles.

Podcast: 5G Americas, Quibi And Snap Short Videos, Google Product Launch

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing the 5G Americas event and the expected impact of 5G, the launch of short-form videos from Quibi and Snap, and analyzing the announcements from Google’s product launch event.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Top Goals and Challenges for AI in Business

The technology may have a very futuristic feel to it, but in current implementations, it’s clear now that artificial intelligence (AI) and machine learning (ML) have very practical real-world applications that aren’t nearly as scary as some may fear.

Thanks to a freshly completed TECHnalysis Research study on the usage of AI applications in US businesses, based on a survey of 504 IT professionals working at medium (100-999 employees) and large (1,000+ employees) companies that are currently doing some type of AI work, the picture that emerges is a pragmatic one. (To read more on the study, you can also check out two previous columns on the subject: “Survey: Real World AI Deployments Still Limited” and “AI Application Usage Evolving Rapidly”.)

Companies want to use AI-based applications to improve their overall efficiency across a number of different areas. The hope is that AI-based tools can reduce some of the more tedious, repetitive tasks that can slow organizations down—or that some simply choose not to do because the tasks are so challenging to maintain.

In addition, companies of various sizes and types believe that AI-based applications can speed up or help automate many of these tasks. Whether it’s automatically filtering spam and phishing attacks from email; analyzing files or other data as they’re opened, created, or passed along; or acting as a perimeter shield on networks and examining the packets that travel across them, the realistic benefits of many AI-based or AI-enhanced applications are proving to be attractive to nearly 1-in-5 US companies with at least 100 employees.
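As a toy illustration of one of the use cases above, the sketch below shows the core idea behind machine-learning-based spam filtering: a naive Bayes classifier trained on a handful of hand-labeled messages. This is not any vendor’s product; the training messages are made up, and real deployments use far larger corpora and much richer features than bare word counts.

```python
# Toy naive Bayes spam classifier: log prior + log likelihood per class,
# with add-one smoothing so unseen words don't zero out a class.
import math
from collections import Counter

def train(messages):
    """Count word occurrences per class and the number of messages per class."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the class with the highest log-probability for the text."""
    words = text.lower().split()
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))  # prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((counts[label][w] + 1) / denom)  # smoothed
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting agenda for monday", "ham"),
    ("quarterly report attached", "ham"),
]
counts, totals = train(training)
print(classify("free prize inside", counts, totals))  # prints "spam"
```

Even a sketch this small captures why these tools appeal to IT departments: once trained, the filter handles the tedious, repetitive triage automatically, and retraining on new examples is cheap.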

The chart in Figure 1 highlights the top goals of AI projects and deployments that these companies have already embarked on.


Fig. 1

In addition to efficiency and automation, many organizations see AI as a great tool for increasing security across many different aspects of their organization, including data, networks, and the devices their employees use.

For a number of companies, AI is also seen as a next-generation big data analysis tool, one that could finally deliver on the promise of big data, which many companies discovered wasn’t as easy to mine for insights as they were led to believe. The idea is that AI algorithms can take over some of the data grunt work necessary to uncover useful information and leverage larger amounts of raw data in more efficient ways. It’s still early days here, but it’s clear that many companies are eager to see these kinds of results.

Some organizations are also hoping to generate cost savings from AI-based tools, but it’s clear that this is still a lesser priority for many organizations, particularly because of the costs that are often involved with AI-based applications and projects.

Speaking of which, cost concerns were one of the top three challenges that companies are currently facing with their AI efforts, as the chart in Figure 2 illustrates.


Fig. 2

The biggest challenges, however, had to do with complexity—both of the technology itself and of the means to implement it. This isn’t terribly surprising, because the level of real understanding about AI is still quite low. Despite the fact that AI and machine learning have been around in some form or other for decades, most people are just starting to learn about them. Plus, the hidden, “black box” way in which many AI algorithms work makes it difficult for anyone but dedicated specialists to completely get their heads around the technology and understand how to make it do what they need (or want) it to do.

On top of all this, there are still many lingering fears about the potential influence of the technology. Despite the relatively low ranking of headcount impact (see chart), for example, it was clear from the verbatim comments that survey respondents entered into an open-ended question on the overall importance, value, and impact of AI that the fear of layoffs triggered by AI loomed large. As one respondent wrote, “…there is a real fear of putting people out of jobs. You hate to be left behind when helpful technology is out there, but it’s hard to eliminate a job that’s been there for 20 years.”

At the same time, there was also a great deal of excitement and promise expressed in response to the same open-ended question. For example, “AI is bound to impact every single industry, and, in our organization, AI can help deliver better search results and deliverables for some specific business cases. Finding patterns and reducing inefficiency is super important for our organization and [both of these] will benefit from the advent of new AI solutions. We strongly support AI-based solutions and prefer to adopt them quickly and gain a business edge.”

Practically speaking, there was also the recognition that AI is here now, and its impact is going to be critical. As one respondent summarized, “I think that AI will change how a lot of us do business. The change will be good. Getting over the initial hurdles of integration is the hardest part. The data provided by AI integrations will be invaluable and come much faster than was possible before.”

(You can download a copy of the TECHnalysis Research AI in the Enterprise Study Highlights for free. A complete version of the full report, with 178 slides that go into extremely fine detail on all the major questions asked in the survey, is available for purchase.)

Podcast: HP PC Event, Microsoft Surface Event, BlackBerry Security Summit, SuperMicro China Server Story

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing several New York City-based industry events from the last week, including HP’s launch of their Spectre Folio convertible PC, the Microsoft Surface, Windows 10 and Office 365 updates, and the BlackBerry Security Summit, as well as analyzing the Bloomberg story on secret chips that the Chinese government supposedly put onto SuperMicro servers that ended up inside Apple, Amazon and 30 other companies.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

News that Caught My Eye: Week of October 5th, 2018

Surface Headphones

This week in NYC, Microsoft announced the Surface Pro 6, the Surface Laptop 2 (both of which now come in a black variant), the Surface Studio 2, and the Surface Headphones. It is the Headphones I want to concentrate on, because in my opinion they have been the most misunderstood product.

Via Microsoft

  • I have seen many criticizing Microsoft for entering this space due to:
    • The dominance of well-established brands like Bose
    • The narrow margin nature of the business
    • The declining sales as consumers shift to wireless earbuds à la AirPods
  • While the above three points are facts and should deter any new player from entering the headphones business, I really do not think they affect Microsoft.
  • Surface Headphones are a Surface companion. This means they are first and foremost targeting Surface users who have bought into the brand.
  • Surface has established itself as the Windows alternative to Apple Mac/MacBooks. This means that it can command a premium on its products. That said, I would expect to see offers that bundle the Surface Headphones in with a Surface.
  • The Surface Headphones are also a vehicle to increase engagement with Cortana and Skype.
  • Some believe that Microsoft should abandon Cortana and fully embrace Alexa but such a view dismisses the role that the digital assistant plays in the broader analytics game. Microsoft must continue to find ways to increase engagement with Cortana and these new Headphones are just one way.
  • Microsoft started with a design that highly complements PC usage, especially for the kind of audience Surface, as a brand, is targeting. Over time, I could see Surface expanding its portfolio, possibly looking at earbuds with added sensors that could be used for health and fitness. Although the Microsoft Band was killed, Microsoft learned a lot about fitness and health, and the tech it built does not have to be limited to a fitness band to be useful.
  • Over time I would also expect to see more colors. For now, despite some criticism, Microsoft stayed true to the first Surface, something the addressable market will appreciate.

LG V40 and Its 5 Cameras

LG’s new $900-and-up V40 ThinQ is different, however. In addition to a better standard camera than its predecessors, the V30 and the G7 ThinQ, the V40 has two additional rear cameras, which provide different perspectives and fields of view. In total, the V40 has five different cameras: three on the back and two on the front, which give its camera system a level of versatility that other phones don’t offer.

Via The Verge

  • Huawei started the camera “mine has more than yours” race and now LG is getting ahead with 5 cameras on a single phone.
  • Smartphone vendors are struggling to differentiate and for those who do not control the OS experience, the camera, which is one of the top features driving purchase, is the most natural place to focus on to drive differentiation.
  • I like LG’s approach, although I have not tried the phone first hand, because they did not just add two cameras and replicate what Huawei did, which is using the extra cameras to improve picture quality by adding more detail. LG is using the three cameras to deliver three separate experiences, a bit like when you carried a digital camera and a set of different lenses for it.
  • The LG V40 has a standard camera for normal shots, a super wide-angle camera for capturing a wider field of view, and, in a first for LG, a telephoto camera to get closer to your subject.
  • From reading early reviews, however, the results are not as encouraging as one would have hoped and it seems that the reasons are to be found in the software and hardware choices.
  • With such a system in place and the big focus on AI, you would expect LG to have implemented an intelligent mode detection which would suggest which camera to use for the shot. LG already had something similar in previous phones where for instance the camera would suggest a “food” mode for those #cameraeatsfirst shots. Why not apply this on the V40 rather than relegating the cameras to become more of a gimmick than a real tool?
  • This is the sword of Damocles for many companies that can do hardware but still struggle with software, and, more importantly, that chase headline differentiation but fail to deliver because cost control stops them from implementing the right hardware.
  • Unless innovation really brings value to customers, and not just cheap thrills, sales will see blips rather than sustained growth driven by loyalty.

Twitter Is Losing its Battles against Fake News

Knight Foundation researchers examined millions of tweets and concluded that more than 80 percent of the accounts associated with the 2016 disinformation campaign are still posting — even after Twitter announced back in July that it had instituted a purge of fake accounts

Via NPR

  • Needless to say, this is bad news for Twitter and Jack Dorsey, who had recently answered questions in Washington precisely on what the company is doing to minimize the potential impact of fake news on the mid-term elections.
  • The study found that more than 60% of Twitter accounts analyzed in the study showed evidence of automated activity. Basically, 60% of the accounts the study looked at were bots. Many of these accounts were also found to be following each other, which would suggest that they share a common source.
  • Twitter’s response to the study was that the data fails to take into account the actions Twitter takes to prevent automated and spam accounts from being viewed by people on the social network.
  • So basically Twitter is saying that the problem would be even bigger than the study shows if it were not for what the company has put in place to limit fake news.
  • While Twitter might think this is a good line of defense, I am not convinced it is. To me, this points to a problem that might just be too big for Twitter, or other social media platforms for that matter, to solve.
  • I am afraid I do not have the answer on how we can win this battle, but I do think we sometimes forget that these platforms are being exploited and while it is their responsibility to protect themselves and their users we should also try and understand why this is happening and who is behind it.
  • Normally I would say that educating the public to spot fake news should be a focus of these brands as well while they try and eradicate the problem. Like you do with kids, you cannot take all the bad guys away but you can teach your kids to spot them and be prepared. Sadly I think that in this case, most of the public does not want to learn how to avoid fake news whether they are spread by bots, press, or politicians.

Apple Watch Series 4 to Drive Strong Upgrade Cycle

When I first saw the new Apple Watch presented at the Steve Jobs Theater, I immediately said it would drive a strong upgrade cycle, and now we, at Creative Strategies, have brand-new data from a study we conducted across 366 current Apple Watch owners in the US in the week leading up to in-store availability. The study was an international one that cut across several geographies, touching a total of 557 consumers. For this article, I will focus on the US data only.

Our panelists were self-proclaimed early adopters of technology, with 64% of them owning an iPhone X. Eighty-four percent of the people who answered our online questionnaire were men, very much in line with the average composition of the early tech adopter profile.

Apple Watch Served its Base Well from the Get-go

Our panel owned a good mix of models: 41% have an Apple Watch Series 3 with Cellular, another 13% own an Apple Watch Series 3 Wi-Fi only, and 15% have a Series 2. What was a surprise, considering how early-tech this base is, was to see that 30% still owned an original Apple Watch.

One might argue that maybe the reason these users are still on the original Apple Watch is that they are not very engaged with it. The data, however, says otherwise. While they are not as engaged as Apple Watch Series 3 owners, they share a love for the same tasks: declining calls, checking messages, and checking heart rate. The most significant gap with owners of more recent Apple Watch models is in the use of Apple Watch as a workout tracker. Here, original Watch owners lag Series 3 owners: 62% to 76%.

Satisfaction among original Apple Watch users is also strong, with 93% saying they were satisfied with the product. While 93% is a lower satisfaction number than the 99% for the Watch Series 3 with cellular, we need to remember that the original Apple Watch was introduced in 2014. Satisfaction at 93% for a four-year-old product is quite impressive.

When we reached out to a few panelists to ask why they had not felt compelled to upgrade so far, they mentioned that software updates and battery life had kept them happy, and that it would take a change in design and compelling new features to drive them to look at a new model. In other words, the original Apple Watch was still serving them well.

Strong Intention to Upgrade

Apple Watch Series 4 seems to hit both upgrade requirements for original Apple Watch owners: 76% say they plan to upgrade, with 41% having already pre-ordered and another 32% planning to do so in the next three months. When asked to select the most compelling new features driving their interest in upgrading, original Watch owners mentioned the faster processor most often (80%), followed by the bigger screen (75%) and the ECG (61%).

Apple Watch Series 3 owners are similar but with different priorities. The larger screen is the most important driver, followed by the faster processor and the ECG. The intention to upgrade is also more cautious, with 29% saying they are planning to upgrade (54% of whom have already preordered), with some users concerned about using their old bands on the new model and some uncertain about which size they would prefer.

Early Tech Users find Gifting Difficult

We have discussed before that early tech users seem to find gifting new tech hard, and the Apple Watch owners on our panel are precisely like that. When we asked if they were planning to buy the new Apple Watch Series 4 as a gift, only 26 percent said they were. This is despite Apple Watch commanding a Net Promoter Score of 72 among panelists. Among the users who are planning on gifting an Apple Watch, 51% will give one to their wife, and another 16% will give one to a parent. When asked which features are motivating the purchase for someone else, four stood way above everything else: larger screen (49%), ECG (45%), and faster performance and fall detection (both at 39%).

Among those intending to gift, 22% already preordered and 48% plan to buy within the next three months.

The Apple Watch User Base is Deep into the Ecosystem

Probably the most fascinating finding of this study is to see how entrenched in the ecosystem Apple Watch users are. While many could see Apple Watch as an accessory, I firmly believe that users who look at it as an essential tool to manage their day and their ecosystem of devices and services are the ones who get the most return on investment. Not surprisingly, multi-device ownership across the panel is quite high: 88% owned an Apple TV, 75% owned AirPods, 71% owned a MacBook Pro, 67% owned an iPad Pro, and 44% owned a HomePod.

Early tech users are a window into the future, which is why they are so valuable to study. While the time it takes for early adopter behavior to reach mainstream users might vary, I think this ownership data best illustrates what Apple is working toward when it comes to its user base. I have been saying for years that Apple cares more about selling more products to the same users than about simply expanding its overall market share in one area. As Apple moves further into services, it will be the combination of products present in a household that will drive engagement and loyalty and build an audience for life.