Proposed Nvidia Purchase and CXL Standard Point to Data Center Evolution

In case there were any remaining doubts about the future of computing in the data center, those doubts were removed yesterday thanks to two major announcements highlighting the role of connectivity across computing elements. In a word, it’s heterogeneous.

The biggest news was the surprise announcement from Nvidia about its proposed $6.9 billion purchase of Israel-based Mellanox, a company best known for making high-performance networking and interconnect chips and cards for servers and other data center components. In a deal that’s largely seen as complementary to Nvidia’s business and financially accretive, the GPU maker is looking to fold Mellanox’s strength and market position in providing networking elements for high-performance computing systems into its growing data center offerings.

Specifically, Mellanox is known as a critical enabler of what’s often called east-west networking traffic across servers within data centers. The explosive demand for this kind of capability is being driven, in large part, by the growth of Docker-style containers, which are a critical enabling technology for advanced workloads such as AI training and advanced analytics. Many of these advanced applications spread their computing requirements across different servers through the use of containers, which enable larger applications to be split up into smaller software components. These applications often require large chunks of data to be shared across multiple servers and processed simultaneously via high-speed network connections. That’s exactly the Mellanox technology that Nvidia wants to leverage for future data center business growth. (In fact, Nvidia already uses Mellanox components in its DGX line of general-purpose GPU (GPGPU)-powered servers and workstations.)

Nvidia’s interest is also being driven by the way computing in the data center is evolving. The underlying principle is that advanced workloads like AI need to be architected in new ways, especially as the slowing of Moore’s Law has curtailed the speed increases that new generations of chips used to deliver. In order to maintain performance advancements and provide the kind of computing power these workloads require, they are going to have to be split across multiple chips, multiple chip architectures, and multiple servers. In other words, they need a heterogeneous computing environment.

Those same principles are also what drove the other data center-related announcement yesterday from Intel and a number of other major data center players, including Dell EMC, HPE, Microsoft, Cisco, Google, Facebook, Huawei, and Alibaba. Specifically, they announced the creation of the Compute Express Link (CXL) Consortium and the 1.0 release of the CXL specification for high-speed interconnect between CPUs and accelerator chips. Leveraging the physical and electrical interconnect capabilities of the upcoming PCIe 5.0 spec, CXL defines a protocol that allows for a cache-coherent, shared memory architecture that permits the shuttling of data between CPUs and various other types of chips, including TPUs, GPUs, FPGAs, and other AI accelerators. Again, the idea is that advanced data center workloads are becoming increasingly diverse and will require new types of chip architectures and computing models to achieve better performance over time.

At a basic level, the difference between the two announcements is that the Mellanox/Nvidia technology operates at a higher level between devices, whereas the CXL protocol works at the chip level within devices. In theory, the two could work together, with CXL-enabled servers communicating with each other over high-speed network links.

Though it’s early, the CXL announcement looks like it could have an important impact on the evolution of data center computing. To be clear, though, a number of challenges remain. For one, CXL already faces a competitor in the CCIX standard (of which Mellanox happens to be a member), and Nvidia offers its own NVLink standard for fast GPU-to-GPU connections. In addition, AMD’s Infinity Fabric seems to offer similar capabilities. If other CPU vendors like AMD and Arm (and its licensees) sign onto the CXL standard, however, that would clearly have a big impact on its adoption, so this will be interesting to watch.

On the potential Nvidia-Mellanox merger, the one big question (other than the necessary regulatory approvals across multiple geographies and the potential geopolitical implications) is whether the purchase will drive Intel and other big players in the data center space to work more closely with other networking suppliers. Only time will tell on that front.

What’s also interesting about both announcements is that they clearly highlight the evolution of data center computing away from simply adding more, faster x86-based CPU cores to individual servers and toward a much more complex mesh of computing pieces connected together across the data center. It’s heterogeneous computing coming to life from two different perspectives, and both clearly point to an important evolution in how computing is done in data centers around the world.

Podcast: Facebook Manifesto, Warren Tech Company Breakup, USB4 and WebAuthn

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing Mark Zuckerberg’s latest post on what he sees as the future of social networking, discussing the plan from Democratic presidential candidate Elizabeth Warren to break up big tech companies like Facebook, Amazon, and Google, and chatting about the implications of the newly announced USB4 connectivity spec and the WebAuthn protocol for standardizing password-less authentication across the web.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Tech Standards Still Making Slow but Steady Progress with USB4 and WebAuthn

Accessory cables and passwords may not seem to have much in common, but the reality is they’re both a regular part of our day-to-day tech existence. We use them both, all the time. Yesterday, the two were linked together in a completely different way, thanks to important announcements about each of the technologies making the news.

First, the USB Promoter Group announced the release of the USB4 specification, the latest iteration of this critical connectivity standard. Building on the existing 3.2 version of the spec, USB4 integrates complete support for Thunderbolt 3 over USB-C connectors, including transfer speeds of up to 40 Gbps (twice as fast as USB 3.2). In other words, USB4 will bring all the benefits of Thunderbolt 3 to any device that enables the new USB standard, but at no additional cost. This important new capability was enabled by Intel—which invented and owns the Thunderbolt 3 technology—offering a royalty-free license for the Thunderbolt protocols to the USB Implementers Forum. The end result is that, by sometime in 2020, we should see much broader adoption of the technology across a wider range of PCs and peripherals.

Second, the World Wide Web Consortium (W3C), which manages and coordinates web standards, announced the ratification of the WebAuthn (short for Web Authentication) protocol, a new browser and website standard for password-less authentication. Driven by the work of the FIDO Alliance, WebAuthn provides a standardized way for various types of authentication mechanisms, including USB-based hardware keys as well as biometric technologies such as face scanners, fingerprint readers, etc., to pass along credentials securely between devices and websites. There had been some other proprietary methods for doing authentication online, but the official release of the final WebAuthn spec means that we should start to see more websites offer password-free login options soon—all of which will be more secure than the clearly broken password-driven mechanisms of today.

To fully take advantage of WebAuthn, websites will have to build in support for it—it won’t automatically rid us of passwords—but now that a W3C standard is in place, that process should move more quickly. Thankfully, most modern browsers, including Chrome, Edge, Firefox, and Safari, already include support for WebAuthn, so now it’s just up to the sites themselves to make the effort.
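
For those curious about what that site-side work looks like, here’s a minimal sketch of the registration half of the WebAuthn flow in browser-side TypeScript. To be clear, the relying party name, user handle, and other values below are illustrative placeholders, and in a real deployment the challenge and user ID would come from the site’s server rather than being generated locally.

```typescript
// A minimal sketch of WebAuthn registration. Values marked as placeholders
// would come from the site's server in any real deployment.
async function registerPasswordlessCredential(): Promise<PublicKeyCredential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder; server-supplied in practice
    rp: { name: "Example Site", id: "example.com" },       // hypothetical relying party
    user: {
      id: new TextEncoder().encode("user-1234"),           // opaque, server-assigned user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // -7 = ES256 (ECDSA with SHA-256)
    authenticatorSelection: { userVerification: "preferred" },
    timeout: 60000,
  };
  // The browser hands off to the authenticator (a USB security key, fingerprint
  // reader, face scanner, etc.); the private key never leaves that device.
  const credential = await navigator.credentials.create({ publicKey });
  // The resulting attestation would then be sent to the server for verification.
  return credential as PublicKeyCredential | null;
}
```

Subsequent logins use a matching navigator.credentials.get() call, which asks the authenticator to sign a fresh challenge from the server rather than transmitting any shared secret; that property is a big part of what makes the approach more resistant to phishing than passwords.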

In the case of USB4, we should finally get what we were originally promised with Thunderbolt 3—one connection standard to rule them all—but in a much more egalitarian fashion. No one ever complained about the technical capabilities of Thunderbolt 3, but many PC and peripheral vendors weren’t eager to pay Intel for Thunderbolt controller chips (or IP royalties for the technology), so adoption of the connectivity standard was more limited than many, including Intel, had hoped. By making the technology available for free and incorporating it into its next-generation Ice Lake CPUs and platforms, Intel is enabling much broader support, and we’ll finally start to see a much wider array of devices and accessories—including displays, external GPUs, storage, docks, and more—using it. Unfortunately, labeling and verification of USB4 could still be a bit tricky because of the open-ended nature of the USB standard, but the good news is that existing Thunderbolt 3-certified cables and devices should work interchangeably with USB4.

As with many standards-related announcements, we won’t see the benefits of either of these developments overnight. Realistically, it will be mid-2020 before we start to see widespread deployments of USB4 in devices and peripherals, and large numbers of websites that leverage the security enhancements of WebAuthn. Still, what these announcements highlight, and help us to remember, is that critical tech standards continue to evolve and continue to make important progress towards improving the usability and security of working with tech devices. The process may not be pretty, but the results clearly are. Here’s to a faster, easier, and more secure 2020.

Podcast: MWC 2019

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the big news from Barcelona with discussion around the transition to 5G, the launch of multiple 5G phones, the potential impact of foldables, and the debut of Microsoft’s HoloLens 2.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Second Gen HoloLens Provides Insights into Edge Computing Models

After shocking the tech world a bit over four years ago with the debut of the first HoloLens, Microsoft announced its second generation device at a special press event in Barcelona, just before the start of MWC.

The HoloLens 2 is a Qualcomm Snapdragon 850-powered augmented reality (AR) headset running a version of Windows on Arm that offers a number of significant improvements over the first-generation product. Expected to be available later in the year at a price of $3,500, HoloLens 2 features a greater than 2x increase in the field of view you can see through the display, and what the company claims is a 3x increase in comfort. While the latter point is certainly hard to quantify, I can report from my own experience that the device is lighter, easier to set up and adjust, and offers a number of ergonomic improvements that should make it easier and more comfortable to wear for longer periods of time.

The display improvements in HoloLens 2 are the most notable versus the v1 headset. The company was careful to maintain the 47 pixels per degree resolution from the original, allowing you to easily read text inside the display—something that isn’t possible on many other AR and VR headsets. The overall size of the image you can see in the second-generation device, however, is significantly larger than it is in the first edition. According to Microsoft, the change is equivalent to moving from a 720p image per eye to a 2K image per eye. In real-world usage, it’s significantly more compelling and makes it possible to use the device for a wider range of applications.

Another critical hardware enhancement in HoloLens 2 is better hand and object tracking, which allows interactions with the holograms that the display generates to be much easier. Touching, moving, and manipulating holograms is more intuitive with the new headset and the interplay between real-world objects and digitally-generated elements provides a more compelling overall experience.

In addition to the hardware enhancements, Microsoft made a number of important software and standards-related announcements to go along with HoloLens 2. As expected, Microsoft is working with a wider range of partners on business-focused applications for the HoloLens 2—which is where the device is still targeted—in industries like manufacturing, medical, field services, and more. Microsoft also announced several of its own software tools for the device. Microsoft Dynamics 365 Guides offers companies the ability to easily create training materials that can run on HoloLens 2, while Dynamics 365 Remote Assist lets you view content originally designed for HoloLens on ARCore-equipped Android mobile devices and ARKit-equipped iOS devices. This work to bridge across standards and platforms is going to be very important over time, so it’s great to see Microsoft working towards breaking down barriers across different AR platforms.

On top of these applications, Microsoft also showed a set of Azure-based cloud services that work along with HoloLens. One of the most compelling is a remote rendering service that lets you generate 3D images at a certain resolution using only the hardware built into the headset, but then allows you to leverage cloud-based computing resources equipped with higher-power GPUs to render a much more detailed version and send it down to the display. For applications where fast, real-time interactions with the model aren’t required, this essentially gives you extremely high-resolution holograms on the HoloLens 2. While there will need to be some software work done to develop these kinds of distributed, heterogeneous computing applications, the end result is very compelling.
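
To make that division of labor concrete, here’s a purely illustrative TypeScript sketch of the local-versus-cloud rendering tradeoff the service is built around. None of the names or types below correspond to Microsoft’s actual Azure or HoloLens APIs; they are hypothetical stand-ins.

```typescript
// Illustrative only: a toy model of deciding where a hologram gets rendered.
// None of these types or functions are real Azure/HoloLens APIs.
interface RenderJob {
  modelId: string;        // identifier for the 3D asset to display (hypothetical)
  needsRealTime: boolean; // does the user need low-latency manipulation?
}

type Renderer = "headset-gpu" | "cloud-gpu";

function chooseRenderer(job: RenderJob): Renderer {
  // Fast, interactive manipulation favors the headset's onboard GPU at lower
  // detail; static or slowly changing views can be rendered at full detail on
  // cloud-based GPUs and streamed down to the display.
  return job.needsRealTime ? "headset-gpu" : "cloud-gpu";
}

console.log(chooseRenderer({ modelId: "engine-block", needsRealTime: false })); // "cloud-gpu"
```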

All told, the collective capabilities of these new Microsoft software tools and services point to a more comprehensive perspective that ties together AR, cloud computing, edge computing, and distributed computing in a very interesting way. While the industry has talked about all these different phenomena and the potential connections between them, most of the discussions have been theoretical in nature. These Azure HoloLens services take those theories and make them real, providing a fascinating perspective on how edge computing applications can leverage different combinations of computing resources and treat them as a single entity, as well as some insight into where heterogeneous computing applications are headed.

In fact, while some were questioning why Microsoft would choose to launch HoloLens 2 at a mobile industry show, the connectivity story that underlies the HoloLens 2 link to Azure and other cloud services is exactly why the setting was appropriate in my mind. Moving forward, we’re going to continue to see the use of distributed computing resources of various types leveraging mobile and other wireless networks in order to create compelling and meaningful applications. The example that Microsoft provided with HoloLens 2 and Azure offers a small glimpse into that future. (On a side note, it also arguably highlights that Microsoft should have included an option for an integrated cellular modem for HoloLens 2 to be able to offer connections in environments where WiFi isn’t readily available—but their new hardware partner program for HoloLens 2 may alleviate that concern.)

The market for AR, VR, and mixed reality devices in business continues to show promise, but real growth has still been modest. The new capabilities in HoloLens 2 can’t single-handedly fix these issues, but it’s clear that Microsoft is taking an important step in the right direction.

Podcast: Samsung Unpacked 2019, Galaxy Fold

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing all the announcements from Samsung’s Galaxy Unpacked event in San Francisco, including their new foldable smartphone, the S10 line of smartphones, their Galaxy Buds earbuds, and their new wearables.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

IBM’s Watson Anywhere Highlights Reality of a Multi-Cloud World

Ever since the rise to prominence of cloud computing, we’ve seen businesses grapple with how to best think about and leverage this new means of computing. Some companies, particularly web-focused ones, dove in head-first and now have their entire existence dependent on services like Amazon’s AWS (Amazon Web Services), Microsoft’s Azure, and Google’s Cloud Platform (GCP). For most traditional businesses, however, the process of moving towards the cloud hasn’t been nearly as clear, nor as easy. Because of large investments in their own physical data centers, thousands of legacy applications, and many other customized software investments that weren’t originally designed with the cloud in mind, the transition to cloud computing has been much slower.

One of the stumbling blocks in moving to the cloud for these traditional vendors is that the shift has often required a monolithic change to an entirely new, distinct type of computing. Needless to say, that’s not easy to do, particularly if the option you’re moving to is seen as a singular choice, with few alternatives. In particular, because AWS was so dominant in the early days of cloud computing, many organizations were afraid of getting locked into this new environment.

As alternative cloud computing offerings from Microsoft, Google, IBM, Oracle, SAP and others started to kick in, however, companies began to see that many different viable alternatives were available. What’s been happening in the cloud computing world over the last 12-18 months is more than just a simple increase in competitive options. It’s a significant expansion in thinking about how to approach computing in the cloud. With multi-cloud, for example, companies are now embracing, rather than rejecting, the concept of having different types of workloads hosted by different vendors.

In a way, we’re seeing cloud computing evolve along a similar path to overall computing trends, but at a much faster pace. The initial AWS offerings, for example, weren’t that conceptually different from mainframe-based efforts, focused around a platform controlled by a single vendor. The combination of new offerings from other vendors as well as different types of supported workloads could be seen as a theoretical equivalent to more heterogeneous computing models. The move to containers and microservices across multiple cloud computing providers in some ways mirrors the client-server stage of computing’s evolution. Finally, the recent development of “serverless” models for cloud computing could be considered roughly analogous to the advancements in edge computing.

In this context, the announcements that IBM made at last week’s Think 2019 conference around their Watson AI services are well timed to meet these evolving cloud computing demands. Specifically, the company said that through their Watson Anywhere initiative they will make Watson AI services available across AWS, Azure, and GCP, in addition to their own IBM Cloud offerings. In addition, for situations where companies may want to develop and/or run AI-based applications in private clouds or their own data centers, the company is licensing Watson to run locally.

Building on the company’s Cloud Private for Data as a base platform, IBM is offering a choice of Watson APIs or direct access to the Watson Assistant across all the previously mentioned cloud platforms, as well as on systems running Red Hat OpenShift or OpenStack across a variety of different environments.

This gives companies the flexibility they now expect, letting them access these services across a range of cloud computing offerings. Basically, companies can get the AI computing resources they need, regardless of the type of cloud computing efforts they’ve chosen to make. Whether it’s adding cognitive services capabilities to an existing legacy application that’s been lifted and shifted to the cloud, or architecting an entirely new microservices-based service leveraging cloud-native platforms and protocols, the range of flexibility being offered to companies looking to move more of their efforts to the cloud is growing dramatically.

Vendors who want to address these needs will have to adopt this more flexible type of thinking and adapt or develop services that match not only the reality of the multi-cloud world, but the range of choices that these new alternatives are starting to enable. The implications of multi-cloud are significantly larger, however, than just having a choice of vendors, or choosing to host certain workloads with one vendor and other workloads with another. Multi-cloud is really enabling companies to think about cloud computing in a more flexible, approachable way. It’s exactly the kind of development the industry needs to take cloud computing into the mainstream.

Extending Digital Personas Across Devices

Sometimes major technology trends happen without us really noticing. That’s certainly the case with a concept that’s been called portable digital identities or digital personas. Essentially, these terms refer to the practice of accessing a consistent set of data and services across a variety of devices. It’s something we all do—especially with the growing range of different computing devices and cloud-based services we now all have access to—but it isn’t something that most people consciously set out to achieve. It just happens without us even really thinking about it.

At a basic level, for example, we all now expect to have a synchronized email inbox across all our devices. But that wasn’t always the case. Though it may be easy to forget, there was a time when even if you read an email on one device, it didn’t show up as read in the email application on a different device, or a response you wrote on one device didn’t automatically appear on other devices you might have used.

We’ve moved well beyond that basic level of email organization now, of course. We have synchronized services for everything from a list of the movies and TV shows we’ve watched on a service like Netflix across smart TVs, PCs, and smartphones, to all the recent rides we’ve taken with companies like Lyft, to a consistent list of browser favorites across operating systems, browsers, and devices. Individually, these are all nice features to have, and they make using the services or applications that support them much easier. Collectively, however, they start to paint the bigger picture of a consistent digital identity that we are each building, without a conscious effort on our part.

Once you recognize this portable digital identity concept and start to think about its implications, you quickly realize that there are a lot of very interesting new possibilities, particularly in terms of how digital personas and shared computing experiences can be further enhanced. One of the first, and most obvious, extensions is to our actual identity—linking all our various accounts and services to who we actually are. While that may sound odd, remember that in this era of account spoofing, takeovers, and other hacking efforts, it’s not always clear that any particular account is owned by you. If, however, you could develop methods that more clearly tie you to your digital persona—such as through biometric authentication, which organizations like the FIDO Alliance are working on, or linking it to the unique SIM card (or eSIM) in your mobile phone, as startup Averon has started to do—then your physical and digital identities could start to be linked in a more definitive way.

Cloud-based storage services such as OneDrive, iCloud, Google Drive, Box, and Dropbox are also critical enablers for this device-independent world of digital personas. While cloud-based backup is certainly an extremely important feature, the real beauty of these services is the ability to access all your data easily across all your devices. This, in turn, lets you jump across different devices, depending on what happens to be the best choice for a given situation (or what you happen to have access to at a given time). The tech industry hasn’t completely resolved all the situations, or updated all the applications, necessary to enable completely seamless data sharing across everything, but tremendous progress is being made all the time.

Another intriguing opportunity to expand on our digital identity is to extend our computing experiences across the range of physical devices we own. Take, for example, the possibility of linking and/or extending the screen of one device onto another. Recently, we’ve started to see applications like Microsoft’s Your Phone app, or the more complete Dell Mobile Connect, provide the ability to view and control content from your mobile phone on your PC. The idea is to link the physical experience of using a particular device with your other devices, so that they start to intentionally function as a single, larger system. This is only possible with a consistent set of data services, but once that’s in place, the possibilities for leveraging it become very intriguing.

The next step I’m hoping to see across the device-sharing spectrum is the ability to use multiple devices together in an even more cooperative way. Imagine, for example, the ability to essentially “throw” the content from your phone screen onto a tablet or other nearby larger-screen device with a simple gesture, or even automatically. In business meetings and conference room environments, we’re starting to see some of these capabilities now, but more work has to be done to make it easier for consumers. Ironically, I think the appearance of foldable phones with larger screens may actually help spur this on because, while the initial high prices of these devices may limit their sales, the appeal of quickly seeing content from your phone on a much bigger display is going to be universal. As a result, I expect to see more efforts to create a larger-screen phone experience from a non-foldable phone go mainstream shortly after the launch of foldables.

Similarly, on the connectivity side, it’s going to be critical for multiple devices to share consistent, high-speed data connections. Again, we’ve started to see some efforts here from the carriers to let you add devices to an existing master data plan, but I think we’re going to need to see person-based plans that automatically connect all the devices an individual owns without having to worry about managing them individually. The expected proliferation of LTE and 5G-equipped devices will likely make this easier, but more work needs to be done to create a single connection persona that works across all our devices.

Of course, an interesting implication of all these developments is that the previously critical distinctions of different platforms, operating systems, and even applications start to become significantly less important. While I’ve said this many times before, it bears repeating that in the era of digital personas, it’s all about your data—not the software or even devices you happen to be using.

In the commercial world, products that take advantage of this device- and platform-independent approach also start to take on more importance over time. Products like Citrix Workspace, for example, have been built to create a digital environment that gives you access to your information and data, regardless of what device you happen to be using. Specific enterprise applications are still central to computing in the business world, so the ability to transparently virtualize applications built for one platform and let them run on another becomes a critical part of functioning in the modern business environment. By adding in the ability to run tasks that require some amount of work across multiple applications via microapps (such as filling out and then filing expense reports), as Citrix has done with their Sapho acquisition, the company is taking the platform- (and even application-) independent concept even further.

While names like portable digital identities or digital personas may be a bit vague, there’s nothing unclear about the impact that these concepts are making on our computing environment and the advancements now occurring in the tech industry. The best technology advancements serve the needs that people have. Given the explosion of different devices and platforms, it’s never been more clear that people are hungry for capabilities that let them get access to their data and services in the easiest and most compelling way possible across the range of devices they now own. The time for expanded, multi-device digital identities is here.

Podcast: Apple, 5G Vs. WiFi, Digital Privacy, Spotify

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing some of this week’s Apple news, debating the future of connectivity with embedded LTE or 5G vs. WiFi, discussing some news and ideas around digital privacy, and chatting about some recent developments with Spotify.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Could Embedded 5G/LTE Kill WiFi?

With less than three weeks to go before the big Mobile World Congress (MWC) trade show in Barcelona, Spain, there’s a lot of attention being paid to wireless technologies, particularly 5G. The next generation cellular network is expected to make a particularly big splash this year, as the first devices that incorporate the technology are expected to be on display. In addition, many are expecting to see a rash of telecom infrastructure equipment suppliers unveiling the latest components necessary to power 5G networks, telecom carriers announcing their pricing and planned rollouts of 5G services, and just about everyone else trying to make some kind of connection between what they’re offering and the new network standard.

But 5G won’t be the only wireless technology making some important debuts in Barcelona. At long last, we should also see the first client devices that incorporate the latest version of WiFi: 802.11ax, more recently dubbed WiFi 6. According to FCC documents discovered by the DroidLife website, for example, it appears Samsung’s next generation Galaxy smartphones, predicted to be announced at their upcoming pre-MWC event, will include support for the faster new WiFi standard. In addition, there are rumors of many more WiFi 6-equipped smartphones and other gadgets being introduced at this year’s MWC. Many of these new devices are expected to be powered by the recently unveiled Qualcomm Snapdragon 855 chipset, which includes built-in support for WiFi 6.

Interestingly, even though 5G and WiFi 6 are different technologies, there are a surprising number of similarities between them, at many different levels. In fact, there are enough of them that some have wondered whether one of the wireless network standards might eventually subsume or replace the other.

First, at a high level, each of the new standards contains a wireless data connection protocol that builds on previous generations and is specifically designed to increase the density of wireless networks. One of the biggest problems limiting the performance of both cellular broadband and local area wireless networks is clogged airwaves—too many people and too many devices trying to leverage a limited amount of space. It’s a classic data traffic jam.

As a result, both 5G (and many of the enhancements introduced with gigabit LTE—which AT&T is misleadingly labeling 5Ge) and WiFi 6 are using some of the same basic technical principles to help alleviate the congestion. Though there are differences in implementation between 5G and WiFi 6, both are using technologies like enhanced Multi-User MIMO (MU-MIMO), OFDMA, advanced QAM, and beamforming to make more efficient use of the radio spectrum defined for each technology. Taken together, these enhancements should help each technology reach theoretical peak transfer rates in the high single-digit gigabits-per-second range.
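
To put a rough number on that claim, here’s a back-of-the-envelope TypeScript sketch of the commonly cited WiFi 6 peak PHY rate, using the standard’s best-case parameters (a 160 MHz channel, 1024-QAM, and eight spatial streams). Real-world throughput is, of course, far lower, and the 5G math differs in its details, but both land in the same high single-digit gigabit range.

```typescript
// Back-of-the-envelope peak PHY rate for 802.11ax (WiFi 6) under best-case
// assumptions; actual throughput in the field is dramatically lower.
function wifi6PeakRateGbps(): number {
  const dataSubcarriers = 1960;      // OFDMA data subcarriers in a 160 MHz channel
  const bitsPerSubcarrier = 10;      // 1024-QAM carries 10 bits per symbol
  const codingRate = 5 / 6;          // highest-rate error-correction coding
  const symbolDurationSec = 13.6e-6; // 12.8 µs symbol plus 0.8 µs guard interval
  const spatialStreams = 8;          // maximum number of MIMO spatial streams

  const perStreamBps =
    (dataSubcarriers * bitsPerSubcarrier * codingRate) / symbolDurationSec;
  return (perStreamBps * spatialStreams) / 1e9;
}

console.log(wifi6PeakRateGbps().toFixed(1)); // ≈ 9.6 (Gbps)
```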

Just to add to the complexity, there are also a number of efforts, such as LAA (License Assisted Access) and MulteFire, which are designed to allow cellular radio signals to travel over the same, unlicensed 5GHz radio spectrum used by WiFi. Concerns have been raised that this combination of cellular and WiFi could lead to interference with WiFi operation, however. In addition, telco carriers, who exclusively license the radio spectrum they use for broadband cellular connections, have voiced concerns about losing access to what could become “private” LTE or even 5G networks.

Despite these issues, a number of wireless vendors, including Qualcomm and Intel, have discussed the potential opportunity for companies to build these kinds of private cellular networks—in essence, replacing WiFi with 5G or LTE. Though this would require all devices connecting to the network to have an integrated cellular modem—no small feat right now—the idea is that this could improve coverage across a campus environment or in a factory, and wouldn’t require the type of log-in process you typically need with WiFi.

At the same time, the similar benefits of integrated LTE (and eventually 5G) in PCs and other devices are also starting to take hold. The ease of a single network connection that doesn’t have to be changed, remembered, or logged into (if you even can), no matter where you are, becomes much more appealing the more you get used to the concept. Throw in the critical fact that cellular connections are considered more secure than WiFi, and you can understand why you should expect to see a lot more devices with integrated cellular broadband over the next few years.

Of course, as appealing as a single network may sound, there are still a number of critical issues that exist. First, telco carriers still have a long way to go to make this a financially attractive and realistically practical option. Yes, the “add a device for $10 more a month” model does exist for most people, but it’s not consistently available, a number of limitations often apply, and it’s still not easy to manage multiple devices on a single account. This is particularly true for companies that have thousands of employees, each of whom could easily have 4-5 different connected devices.

In addition, there are a number of attractive, lower-cost, and relatively easy alternatives. Right now, WiFi signals are nearly as ubiquitous as cellular connections, and in most cases they’re free, which is always tough to compete with. Plus, some of the enhancements in WiFi 6 will likely fix the frustrations people often have with WiFi in dense environments (which typically trigger the switch to a cellular broadband connection—such as at a trade show, etc.). On top of that, tethering a WiFi-enabled device to a cellular-connected one is getting much easier, particularly now that Google has announced that the automatic tethering features of some ChromeOS devices are coming to almost all Chromebooks and a wide range of popular Android-based smartphones.

Ultimately, 5G and WiFi 6 aren’t really competitive technologies, but complementary ones—at least for now. In fact, it will probably be difficult to find a new 5G device that doesn’t also support WiFi 6—the two technologies can work hand in hand. For most people, 5G will handle the wide-area wireless connection, and WiFi 6 will handle the local wireless connection. Eventually, however, there could certainly come a time when only one of them will be necessary.

It may seem crazy to think that WiFi could go away, especially given how pervasive it is today. But if you fully take into account the advances that 5G is expected to bring—not least of which is a huge number of small cells that can be used indoors and other places where WiFi has typically reigned—the idea may not be as far-fetched as it first appears.

Podcast: AMD, Microsoft, Apple Earnings, FaceTime Bug, Apple-Facebook-Google Spat

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the recent earnings announcements from AMD, Microsoft, and Apple, as well as discussing the implications of the group FaceTime bug Apple disclosed this week, and debating the impact of the recent spats between Apple and Facebook, and Apple and Google, over enterprise application certifications.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Successful IT Projects More Dependent on Culture Than Technology

Just as many people adopt resolutions at the beginning of the year in an effort to enable better versions of themselves, so too, do many business organizations. Strategic new year plans and building roadmaps for the all-important “digital transformation” that so many companies seem to be obsessed with these days are all common activities in the first few weeks of a calendar year. (Whether there’s really much to the whole digital transformation concept is a whole other question—but one we’ll save for another day.)

Given the enormous range of new IT-focused technologies that companies are trying to integrate into their organizations, or are at least evaluating, these efforts are not insignificant tasks. From building out a multi-cloud or hybrid cloud strategy, to digital workspace-driven desktop strategies, to enabling an AI-powered edge computing solution, there’s no shortage of incredibly impactful new technologies that companies are hoping to leverage in their efforts to improve themselves from an efficiency, cost, and productivity point of view.

Many technology industry vendors are, of course, focused on getting their messages out about the specific approaches they take to these and many, many other IT-related issues. Their primary goal, typically, is to describe how their solutions are better than those of their competitors, either from a functionality, cost, or ease-of-use perspective.

While these approaches are understandable, they’re actually missing what’s typically the most important factor in determining whether or not a particular project will be successful: the culture inside the organization. Specifically, for most big IT-related efforts to have their intended impact, they have to be embraced by C-level management and essentially pushed down into the organization. Without that executive-level buy-in, many big tech deployments never achieve what they’re really capable of—regardless of how good the particular technology or product may be.

Of course, it’s understandable why many vendors are missing, or at least not concentrating enough attention on, this critical point: they don’t (and typically can’t) control it. Yes, big tech vendors and their deployment service arms (or partners) can work to make sure that their products are installed and functioning properly, but they can’t guarantee that people within their customer’s organization are actually going to use those products. Again, it takes a consistent message from top management to ensure that investments they make in new cutting-edge technologies are really brought into the day-to-day operations of their companies.

Too often, high-level execs will, at best, give only their financial blessing to a given IT technology investment, but then walk away without ensuring that it gets thoroughly integrated. Why? Because it’s often very hard to do and very time-consuming.

If and when that lack of commitment to full integration happens, and the project subsequently doesn’t live up to initial expectations, the technology or product itself often takes the blame. To be sure, there are certainly numerous examples of products that don’t deliver on what a vendor promises and that can’t meet a particular organization’s unique requirements. In many situations, however, the fault lies not with the product or the vendor who sold it, but with the customer organization. While it’s common to say that the customer is never wrong, if a customer doesn’t put the necessary measures in place to ensure that a given technology is incorporated into their business processes in the way it was intended, then they are at fault. On the other hand, products that don’t really have all the capabilities that organizations need can still succeed if they are adopted and integrated with the right kind of approach.

So, how do vendors deal with this issue? Frankly, some of it has to do with better pre-qualifications of prospective clients. Sales reps need to spend more time getting to really know and appreciate the IT and management culture of their customers and prospects. Vendors should develop simple culture tests that help them better understand whether or not the potential customer organization is equipped to really deploy their technology in the way it was intended. Inevitably, the process will lead to tremendous reductions in frustrations, as well as time and money spent (or wasted, as the case may be) for both vendors and their customers. Plus, in an era when many vendors are focused on strengthening their brands and what they stand for, avoiding situations where customers are sold the wrong (or ineffective) product can go a long way towards maintaining good will with prospective customers.

Admittedly, a lot of these ideas are much easier said than done, and there are often extenuating circumstances that can dramatically complicate the simple examples I’ve provided. However, as we start to move into some critical new approaches to computing, including AI-based efforts, extensions to cloud computing, and more, it’s essential for companies selling these advanced solutions to spend a bit more upfront time with their customers to make sure they are putting the right kind of solution into the right kind of environment. It may sound easy, but it rarely ever is.

XR Gaming Market Remains Challenging

For years, people touted virtual reality (VR), augmented reality (AR), mixed reality (MR), or the combined XR (extended reality) as one of the big growth opportunities for the tech market. Even now, in spite of some relatively well-known names shutting down and expectations for the market getting radically muted, there are still some holding out hope that AR/VR (or whatever form of alphabet soup you prefer) will be the “next big thing.”

A key part of the expectations for the AR/VR market was that consumers would take to the technology and, specifically, that consumer gaming would be a big hit. New data from TECHnalysis Research suggests otherwise, however, as the challenges for mixed reality gaming look to be much more difficult than many initially thought.

Based on a recently completed online survey of 2,030 gamers (defined for this study as people who said they game on a PC, smartphone, tablet, or other device for at least 2 hours a week), roughly split between the US (1,017) and China (1,013), current AR and VR gaming experiences appeal to only a fraction of the overall market, particularly in the US.

First, survey participants were asked to select the types of devices they own and use for gaming from a list of 17 categories, including different PC form factors and operating systems, different smartphone and gaming console platforms, and others, including AR/VR headsets. Not surprisingly, ownership and usage of AR/VR headsets were quite low in the US, even among gamers—in fact, it was the smallest ownership category at 4.2%. This is due, in part, to the fact that AR/VR headsets are a relatively new category compared to the others, but it also reflects the limited interest many US consumers have shown towards these devices.

Thankfully, the story was better in China, where 13% of gaming respondents said they owned a headset of some type. Even there, however, ownership levels were only ahead of Chromebooks, Nintendo gaming consoles (which used to be banned in China), Chrome-based desktops, iMacs, and MacBooks. Still, it’s clear that there’s generally more acceptance of and interest in AR and VR in China than there is in the US.

In addition to low ownership, the average time spent gaming on an AR/VR headset as a percentage of overall gaming time in a typical week was less than 1% in the US and only 2.3% in China. Obviously, platforms like Android phones, Windows 10 PCs, iPhones, and others offer a much wider variety of gaming options than AR/VR headsets. As a result, these other devices consume a large percentage of the time people spend gaming. The result is that even among a dedicated gaming crowd, AR/VR gaming simply isn’t attracting the kind of attention and focus that it needs to stand out as a breakthrough application. To be fair, there are many other types of AR/VR applications available that people could be more interested in, but if strong gaming support isn’t there for the category, that could be a significant long-term challenge.

Unfortunately, the story gets a bit worse when you look at the overall experience and outlook that gamers have with AR and VR gaming, particularly in the US. Respondents were asked to describe their level of experience/satisfaction for different subcategories of AR/VR gaming—PC-based VR, smartphone-driven VR, smartphone-driven AR, and gaming-console-powered VR. As Figure 1 illustrates, there’s still a challenge in getting people to try these experiences, with an average of just under 38% of US-based gamer respondents saying that they had yet to try each of these new gaming experiences.


Fig. 1: US gamers’ experience with and enjoyment of AR/VR gaming subcategories

What’s even more surprising (and potentially troubling), however, is that in every single category, more people who had tried the experience said they didn’t enjoy it than said they did. Specifically, respondents were asked whether they owned a device in each subcategory and whether they enjoyed the experience, or whether they had simply tried such a device and subsequently liked it or not. The fact that the dislikes (don’t enjoy) outnumbered the likes (enjoy) clearly suggests that the industry has a long, long way to go to have a big, positive impact among US gamers. It also highlights the fact that the technology is still very immature and clearly needs to improve significantly before it can appeal even to a dedicated audience that’s highly predisposed to like it.

Again, thankfully, the story was definitely much more positive in China, as illustrated in Figure 2.


Fig. 2: Chinese gamers’ experience with and enjoyment of AR/VR gaming subcategories

The number of people who hadn’t tried the various subcategories of AR/VR was five percentage points lower than in the US, at just under 33%. Even more importantly, the exact opposite relationship exists between the like and dislike numbers for those who had tried these devices in China. Every single category shows that more people enjoyed the experience than didn’t, with a particularly large positive gap for both PC-based VR gaming and console-based VR gaming. Even in China, about a quarter of the gaming respondents didn’t care for AR/VR gaming, but this is certainly the type of outlook that companies trying to target the AR/VR gaming market would like to see. Exactly why the numbers for AR/VR are so much better in China isn’t entirely clear, but it’s likely due in part to the wider range of AR/VR gaming titles available there, as well as the generally more enthusiastic gaming community that exists in China.

The overall gaming market is growing quite strongly around the world, as renewed enthusiasm for PC gaming, as well as huge growth in interest for eSports, game streaming content channels like Twitch, gaming competitions, and big budget games like Fortnite all clearly illustrate. (Future columns will highlight more specifics from the survey to support all these points.) For AR/VR, however, the gaming momentum and seemingly natural tie-in to the category hasn’t had the kind of positive impact on device sales that many expected it would.

Certainly, there are other opportunities for different types of AR/VR applications, and right now, there does seem to be some enthusiasm in the enterprise/business segment for AR/VR. While that’s certainly good news, most of these applications are small, niche projects that can be difficult to turn into large, scalable businesses. Without consumer gaming, the short- and mid-term prospects for AR and VR products are going to be very tough, and without dramatic improvements in quality and large reductions in price, even the long-term prospects could be difficult to sustain.

Podcast: Netflix, Voice Assistants and Smart Home, Motorola Razr

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the recent news and earnings announcements from Netflix and what it means for streaming entertainment content trends, discussing the growth in voice assistant-equipped devices and what the implications of the trend could mean for smart homes, and commenting on rumors of a revived Motorola Razr flip phone with a foldable screen.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Voice Assistant War: What If Nobody Wins?

One of the clearest developments that came out of 2018, and prominently on display last week in Las Vegas at CES 2019, was the rise of the embedded voice assistant. Amazon’s Alexa and Google’s Assistant were omnipresent at the show, thanks to their extremely wide range of partners and, in Google’s case, their enormous outdoor booth. Samsung’s Bixby also made a much stronger showing in a wide array of Samsung products, and was even touted as the company’s home for AI-based developments. Combine that with recent numbers from Amazon and Google about compatible device shipments, and it’s clear that voice assistants have gone mainstream.

But as large and important as the market for voice assistant-capable devices may be, there is still a great deal of uncertainty about where it could be going. At a basic level, of course, there is the evolving market share battle between the leading players. Amazon is generally seen as winning the war so far, but Google has been coming on strong, plus Apple, Microsoft, and Samsung are too big to ignore, especially given how young this market still is.

The dynamics of the voice assistant market are now becoming much more complex. At this year’s CES, for example, we started to see even more devices that support multiple voice assistant platforms. On the one hand, this makes sense, because it’s not at all clear who the winner will be, or even whether there will ever be a true voice platform winner. By allowing people to select from multiple voice assistant options, vendors are giving their customers more flexibility. But there are some potential downsides to this approach as well, because including support for multiple platforms inevitably adds more complexity and development costs (and, therefore, potential price increases) to most devices. Plus, when you scratch beneath the surface, you can find instances where switching between voice assistant platforms could even impact functionality because of the varied capabilities (or lack thereof) across different platforms.

Lenovo’s clever new Smart Tab devices are an interesting example of this potential conundrum. Both the $200 M10 and $300 P10 are 10” Android tablets running the Oreo version of the OS—meaning they have the Google Assistant feature built in—but when they are put into their bundled smart dock, they default to Amazon smart display “Show Mode,” similar to the Echo Show. The Google Assistant features work when the tablets are undocked, but won’t work in the dock, because Lenovo optimized the Smart Tabs’ visual capabilities to work only with Amazon’s platform. Now, there’s certainly nothing wrong with that choice, and for the record, the Smart Tabs look to be a very attractive alternative to existing, non-mobile smart displays, but it does highlight some of the tradeoffs that vendors supporting multiple voice assistants have to make.

An even more confounding problem that consumers are likely to start facing this year is owning multiple products with different smart assistant platforms. With embedded voice assistants being built into everything from toilets to showers to home appliances—thanks to impressive (and inexpensive) semiconductor solutions from companies like Qualcomm, Intel, and others that make them easy to integrate—it’s soon going to be hard to buy home devices that don’t have some kind of voice capabilities. On tech-focused devices, it makes sense to consider the particular voice platform(s) supported as a key purchase criterion. For other, non-tech devices, however, the voice assistant will simply become a feature that may be nice to have but not a make-or-break factor. The end result is that people are likely to end up with multiple different voice assistants—and that could get messy very quickly, especially if they are turned on by default (as they are likely to be).

While some people may be perfectly comfortable working across multiple voice assistants, and actually remembering which ones are enabled on which devices, most people are likely to get quickly confused in such a scenario. In fact, it could be frustrating enough that people stop using the voice assistant capabilities entirely. Of course, more tech-savvy consumers could just pick a particular platform and only enable it on the devices that support it to simplify things. Even then, however, there can be challenges if multiple devices try to respond simultaneously to a given request. Most of the major platforms are working to address the multiple-device response issue as we speak, but it remains to be seen how effective those efforts will be as more and more voice assistant-enabled devices make their way into people’s homes.

As we’ve seen in plenty of other device platform battles, it’s very difficult to get people to stick with a single ecosystem. For example, while Apple may tout some impressive capabilities across its range of iOS, macOS, watchOS, and tvOS devices, there are very few people who only own Apple devices. Similarly, even though a significant percentage of US homes own at least one Android-based device with Google Assistant support or one Alexa-based device, it’s not at all clear that they’re only going to stick with that platform. We live in a very heterogeneous device world, and that heterogeneity is likely to spread to the world of voice assistants as well, with implications that could prove to be challenging moving ahead.

There’s no doubt that voice assistant platforms are an important and increasingly impressive new technology, but let’s hope they don’t become victims of their collective success.

Podcast: CES 2019

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the big news announcements coming out of this year’s CES trade show in Las Vegas, including developments in 8K TVs, gaming PCs, 5G, autonomous cars, robotics and more.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Big CES Announcements are TVs and PCs

What’s old is new again, or so it seems at this year’s CES trade show in Las Vegas.

After years of gadgets, drones and other gizmos grabbing the headlines, a lot of the big announcements from this year’s show are coming from some of the oldest categories: TVs and PCs. To be sure, there are plenty of interesting (and odd—smart toilets, anyone?) products on display at this year’s CES, but there are a surprisingly large number of news announcements and important speeches from many TV and PC vendors.

Admittedly, most of the news on the TV and PC side is more evolutionary and not really breakthrough, but frankly, that’s the state of the consumer electronics industry overall these days. In addition, as with most categories, there was a great deal of discussion about the integration of AI into these “traditional” devices, demonstrating how the concept of artificial intelligence really is reaching across the entire tech industry.

The news cycle for TVs kicked off on Sunday with the surprise announcement that Apple and Samsung were working together to integrate iTunes, HomeKit, and AirPlay 2 (and likely Apple’s forthcoming video streaming service) directly into new Samsung TVs. It was followed the next day by somewhat similar announcements from LG, Vizio, and Sony for their new TVs, with the exception of iTunes, which will remain a Samsung exclusive through this spring. Essentially, this development means that Apple is offering a software-based solution for providing access to its services, removing the need for consumers to buy an Apple TV box. (Interestingly, it also means Apple is building a Tizen—and likely soon an Android—version of iTunes.) In exchange, Apple gets access to a wide range of current smart TV customers for its video services. It’s a clear example of Apple’s evolution towards a services focus and, thankfully, highlights the company’s willingness to bring those services to devices other than those with an Apple logo.

As expected, there were a lot of announcements surrounding 8K TVs, but, of course, little content to show for it. (Heck, it’s still hard to find much 4K content—but the new Apple integration with smart TVs will include support for 4K, so that should help.) A wide range of vendors showed 8K sets in an enormous range of sizes, all of which promised sophisticated upscaling of existing 4K content. Thankfully, rather than positioning 8K as a replacement for 4K, Sony only put out 8K TVs that are 85” and larger—bigger than most all of their 4K TVs. So, for them, it’s really just an extension to their screen sizes that also happens to be 8K ready. Unfortunately, most of the other TV vendors weren’t quite as clear on their positioning of 8K vs. 4K.

Finally, though the company had already previewed it last year, LG announced that it would be shipping its roll-up OLED-based TV as a real product this year. No pricing was announced, but the 65” rollable 4K display-based device is expected to be available towards the middle or end of this year.

On the PC side, Dell, HP, Lenovo, and Samsung all unveiled their latest laptops and desktops, with a range of offerings that covers everything from low-end Chromebooks, through mainstream consumer and business-focused PCs, up through high-end gaming machines. In fact, there was a particularly strong focus on the gaming side at CES, with some impressive new offerings from several players.

Dell took the opportunity to present their new Alienware Legend design (and latest CPU and GPU offerings) for their Alienware line of products. Long an iconic, though aspirational, gaming PC brand, the latest Alienware offerings include the new Area51m, the first upgradeable notebook to use desktop CPUs (up to Intel Core i9) and GPUs (up to Nvidia RTX 2080). For more mainstream gamers, Dell also announced updates to their G line of notebooks (the G15 and G17), both of which offer 8th Gen Intel CPUs and Nvidia discrete GPUs. On the premium consumer side, Dell finally (!) fixed the “nosecam” in their otherwise excellent XPS 13 by putting an extremely small new video camera above the screen, yet still kept the extremely small bezels.

From HP, there were several new gaming-focused devices, including the first release of an Nvidia G-Sync format BFGD (Big Format Gaming Display), the 65”, 1440p, $4,995 OMEN X Emperium 65 first demoed last year, as well as updates to its Omen line of gaming notebooks (Omen 15) and desktops (Omen Obelisk). HP also showed one of the first AMD-powered Chromebooks, the HP Chromebook 14 (Acer announced one as well), as well as a more upscale convertible Chromebook dubbed the Chromebook x360 14 G1. For business users, HP showed a new take on an OLED screen-based notebook (the Spectre x360 15) that promises to offer better battery life than first-gen OLED notebooks.

Lenovo showed several new additions to its Legion line of gaming-focused PCs, as well as a slew of gaming-focused accessories. On the traditional PC side, Lenovo touted some interesting AI features in its latest Yoga machines, such as the Yoga S940, including the ability to filter out ambient audio during online conference calls and the ability to shift content to connected displays based on where you are looking, amongst others. In addition to PC updates, Lenovo continued to grow its offerings in the smart home market with the Lenovo Smart Clock, a $79 smart alarm clock with a display—somewhat similar to a mini version of its Smart Display. Speaking of which, the company also debuted two tablet versions of its smart display, called Lenovo Smart Tabs, that let you carry the screen element around your house like a typical tablet, but then dock into a base station that offers integrated speakers and other features consistent with the standalone Smart Display. With models at $199 and $299, it’s a clever variation on the connected display category that could appeal to consumers looking for more multifunction devices.

Samsung introduced its first-ever gaming laptop, the Samsung Notebook Odyssey, a further reflection of the growing interest in PC gaming. In addition, the company offered new designs for its convertible Notebook 9 Pro and Pen series, the latter of which leverages the same design (and even the same color) as the pen from the Note 9 series smartphones. It’s the first time Samsung has shown this level of integration between its different product lines and, hopefully, it’s a portent of more to come.

There was also big news announced (and some still to come) from the big PC component players. AMD, in addition to securing a highly coveted CES keynote spot for CEO Lisa Su (fresh off a year in which the company impressively achieved the status of best-performing stock on the S&P 500), unveiled its second-generation Ryzen mobile CPUs. Expected to be used in both ultrathin and gaming-focused devices (thanks to integrated Radeon GPU cores), the Ryzen mobile parts come in both 15W and 35W versions. AMD also debuted new 7th gen A-Series CPUs, which power both the HP and Acer Chromebooks.

Intel, for its part, provided more details on the 10 nm CPUs that the company plans to ship in systems by this year’s holiday season. Codenamed Ice Lake, the SOC (system on chip) offers CPU architecture enhancements, significantly improved integrated Gen 11 graphics, Thunderbolt 3, WiFi 6, and DL Boost for accelerating AI functions. One of the first systems to include it will be a future iteration of Dell’s XPS line. Intel also previewed the first real-world implementation of its Foveros 3D chip-stacking technology in a tiny motherboard platform codenamed Lakefield. Lakefield will incorporate the hybrid architecture Sunny Cove CPU design (which pairs one “big” Core CPU with four “little” Atom CPU cores) that the company first mentioned back in December at its analyst day. At its CES press conference, Intel shared more details about the SOC design, as well as a few examples of potential form factors leveraging the new platform.

Nvidia’s big news for the PC market focused on gaming, with the debut of the $349 RTX 2060, which brings the company’s ray tracing-capable Turing architecture down to mainstream price points. The company also discussed more about its G-Sync efforts (such as with the aforementioned HP 65” gaming display).

All told, it was an impressive display of admittedly incremental, yet still important advances in what continue to be the two largest categories in the consumer electronics world.

Top Tech Predictions for 2019

Though it’s a year shy of the big decade marker, 2019 looks to be one of the most exciting and most important years for the tech industry in some time. Thanks to the upcoming launch of some critical new technologies, including 5G and foldable displays, as well as critical enhancements in on-device AI, personal robotics, and other exciting areas, there’s a palpable sense of expectation for the new year that we haven’t felt for a while.

Plus, 2018 ended up being a pretty tough year for several big tech companies, so there are also a lot of folks who want to shake the old year off and dive headfirst into an exciting future. With that spirit in mind, here’s my take on some of what I expect to be the biggest trends and most important developments in 2019.

Prediction 1: Foldable Phones Will Outsell 5G Phones
At this point, everyone knows that 2019 will see the “official” debut of two very exciting technological developments in the mobile world: foldable displays and smartphones equipped with 5G modems. Several vendors and carriers have already announced these devices, so now it’s just a question of when and how many.
Not everyone realizes, however, that the two technologies won’t necessarily come hand-in-hand this year: we will see 5G-enabled phones, and we will see smartphones with foldable displays. As yet, it’s not clear that we’ll see devices that incorporate both capabilities in calendar year 2019. Eventually, of course, we will, but the challenges in bringing each of these cutting-edge technologies to the mass market suggest that early devices will include one or the other. (To be clear, however, the vast majority of smartphones sold in 2019 will have neither an integrated 5G modem nor a foldable display—high prices for both technologies will limit their impact this year.)

In the near-term, I’m predicting that foldable display-based phones will be the winner over 5G-equipped phones, because the impact that these bendable screens will have on device usability and form factor is so compelling that I believe consumers will be willing to forgo the potential 5G speed boost. Plus, given concerns about pricing for 5G data plans, limited initial 5G coverage, and the confusing (and, frankly, misleading) claims being made by some US carriers about their “versions” of 5G, I believe consumers will limit their adoption of 5G until more of these issues are resolved. Foldable phones, on the other hand—while likely to be expensive—will offer a very clear value benefit that I believe consumers will find even more compelling.

Prediction 2: Game Streaming Services Go Mainstream
In a year when there’s going to be a great deal of attention placed on new entrants to the video streaming market (Apple, Disney, Time Warner, etc.), the surprise breakout winner in cloud-based entertainment in 2019 could actually be game streaming services, such as Microsoft’s Project xCloud (based on its Xbox gaming platform) and other possible entrants. The idea with game streaming is to enable people to play top-tier games across a wide range of both older and newer PCs, smartphones, and other devices. Given the tremendous growth in PC and mobile gaming, along with the rise in popularity of eSports, the consumer market is primed for a service (or two) that would allow gamers to play popular high-quality gaming titles across a wide range of different device types and platforms.

Of course, game streaming isn’t a new concept, and there have been several failed attempts in the past. The challenge is delivering a timely, engaging experience in the often-unpredictable world of cloud-driven connectivity. It’s an extraordinarily difficult technical task that requires lag-free responsiveness and high-quality visuals packaged together in an easy-to-use service that consumers would be willing to pay for.

Thankfully, a number of important technological advancements are coming together to make this now possible, including improvements in overall connectivity via WiFi (such as with WiFi 6) and wide-area cellular networks (and 5G should improve things even more). In addition, there’s been widespread adoption and optimization of GPUs in cloud-based servers. Most important, however, are the software advancements that enable technologies like split or collaborative rendering (where some work is done in the cloud and some on the local device), as well as AI-based predictions of actions that need to be taken or content that needs to be preloaded. Collectively, these and other related technologies seem poised to enable a compelling set of gaming services that could drive impressive levels of revenue for the companies that successfully deploy them.
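
To make that split-rendering idea a bit more concrete, here’s a deliberately minimal TypeScript sketch (my own illustration, not any vendor’s actual implementation) of the kind of per-frame decision a streaming client has to make: if the network can return a cloud-rendered frame within the frame budget, use it; otherwise, fall back to rendering locally.

```typescript
// Illustrative only: a streaming client choosing, per frame, between a
// cloud-rendered frame and a locally rendered fallback. The names and
// numbers are hypothetical, not from any actual game streaming service.

type FrameSource = "cloud" | "local";

interface LatencyProbe {
  // Rolling average of recent round-trip times to the server, in ms.
  averageRttMs(): number;
}

function chooseFrameSource(probe: LatencyProbe, frameBudgetMs: number): FrameSource {
  // A 60 fps target leaves roughly 16.7 ms per frame; if the network
  // can't deliver inside that budget, render locally at lower fidelity.
  return probe.averageRttMs() <= frameBudgetMs ? "cloud" : "local";
}

// Example usage with a stubbed-out probe standing in for real measurements.
const probe: LatencyProbe = { averageRttMs: () => 12 };
console.log(chooseFrameSource(probe, 16.7)); // "cloud"
```

Real services layer far more on top of this, including the AI-driven prediction and preloading mentioned above, but the underlying trade-off between network latency and local horsepower is exactly this simple at its core.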

It’s also important to add that, although strong growth in game streaming services that are less hardware-dependent might seem to imply a negative impact on gaming-specific PCs, GPUs, and other game-focused hardware (because people would be able to use older, less powerful devices to run modern games), the opposite is in fact likely to be true. Game streaming services will likely expose an even wider audience to the most compelling games, and that, in turn, will likely inspire more people to purchase gaming-optimized PCs, smartphones, and other devices. The streaming services will simply give them the opportunity to play (or continue playing) those games in situations or locations where they don’t have access to their primary gaming devices.

Prediction 3: Multi-Cloud Becomes the Standard in Enterprise Computing
The early days of cloud computing in the enterprise featured prediction after prediction of a winner between public and private cloud, and even of specific cloud platforms within those environments. As we enter 2019, it’s becoming abundantly clear that all those arguments were wrongheaded and that, in fact, everyone won and everyone lost at the same time. After all, which of those early prognosticators would have ever guessed that in 2018, Amazon would offer a version of Amazon Web Services (called AWS Outposts) that a company could run on Amazon-branded hardware in its own data center/private cloud?

It turns out that, as with many modern technology developments, there’s no single cloud computing solution that works for everybody. Public, private, and hybrid combinations all have their place, and within each of those groups, different platform options all have a role. Yes, Amazon currently leads overall cloud computing, but depending on the type of workload or other requirements, Microsoft’s Azure, Google’s GCP (Google Cloud Platform), or IBM, Oracle, or SAP cloud offerings might all make sense.

The real winner is the cloud computing model, regardless of where or by whom it’s being hosted. Not only has cloud computing changed expectations about performance, reliability, and security, but the DevOps software development environment it inspired and the container-focused application architecture it enabled have radically reshaped how software is written, updated, and deployed. That’s why you see companies shifting their focus away from the public infrastructure-based aspects of cloud computing and towards the flexible software environments it enables. This, in turn, is why companies have recognized that leveraging multiple cloud types and cloud vendors isn’t a weakness or a disjointed strategy, but actually a strength that can be leveraged for future endeavors. With cloud platform vendors expected to work towards more interoperability (and transportability) of workloads across different platforms in 2019, it’s very clear that the multi-cloud world is here to stay.

Prediction 4: On-Device AI Will Start to Shift the Conversation About Data Privacy
One of the least understood aspects of using tech-based devices, mobile applications, and other cloud-based services is how much of our private, personal data is being shared in the process—often without our even knowing it. Over the past year, however, we’ve all started to become painfully aware of how big (and far-reaching) the problem of data privacy is. As a result, there’s been an enormous spotlight placed on data handling practices employed by tech companies.

At the same time, expectations about technology’s ability to personalize these apps and services to meet our specific interests, location, and context have also continued to grow. People want and expect technology to be “smarter” about them, because it makes the process of using these devices and services faster, more efficient, and more compelling.

The dilemma, of course, is that enabling this customization requires access to some level of personal data, usage patterns, etc. Up until now, that has typically meant that most any action you take or information you share has been uploaded to some type of cloud-based service, compiled and compared to data from other people, and then used to generate some kind of response that’s sent back down to you. In theory, this gives you the kind of customized and personalized experience you want, but at the cost of your data being shared with a whole host of different companies.

Starting in 2019, more of the data analysis work could start being done directly on devices, without the need to share all of it externally, thanks to the AI-based software and hardware capabilities becoming available on our personal devices. Specifically, the idea of doing on-device AI inferencing (and even some basic on-device training) is now becoming a practical reality thanks to work by semiconductor-related companies like Qualcomm, Arm, Intel, Apple, and many others.

What this means is that—if app and cloud service providers enable it (and that’s a big if)—you could start getting the same level of customization and personalization you’ve become accustomed to, but without having to share your data with the cloud. Of course, it isn’t likely that everyone on the web is going to start doing this all at once (if they do it at all), so inevitably some of your data will still be shared. However, if some of the biggest software and cloud service providers (think Facebook, Google, Twitter, Yelp, etc.) started to enable this, it could start to meaningfully address the legitimate data privacy concerns that have been raised over the last year or so.
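
As a purely illustrative sketch of what this could look like in practice, here’s how an app might run a small model directly on the device using TensorFlow.js (a real on-device inferencing library); the model URL and the interest-classification task are hypothetical stand-ins, but the pattern is the point: the model comes down to the device, and the raw personal data never leaves it.

```typescript
// Hypothetical example: classify a user's interests on-device so that
// only the derived result (a single label), not the raw personal data,
// would ever need to be shared with a cloud service.
import * as tf from "@tensorflow/tfjs";

async function classifyLocally(features: number[]): Promise<number> {
  // Download a small pre-trained model to the device (URL is illustrative).
  const model = await tf.loadLayersModel("https://example.com/interests/model.json");

  const input = tf.tensor2d([features]); // shape: [1, featureCount]
  const scores = model.predict(input) as tf.Tensor;

  // Keep only the winning class index; the private inputs stay local.
  const label = (await scores.argMax(-1).data())[0];

  input.dispose();
  scores.dispose();
  return label;
}
```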

Apple, to its credit, started talking about this concept several years back (remember differential privacy?) and already stores things like facial recognition scans and other personally identifiable information only on individuals’ devices. Over the next year, I expect to see many more hardware and component makers take this to the next level by talking not just about their on-device data security features, but also about how onboard AI can enhance privacy. Let’s hope that more software and cloud-service providers enable it as well.

Prediction 5: Tech Industry Regulation in the US Becomes Real
Regardless of whether major social media firms and tech companies enable these onboard AI capabilities or not, it’s clear to me that US social consciousness has reached the point where tech companies managing all this personal data need to be regulated. While I’ll be the first to admit that the slow-moving government regulatory process is ill-matched to the rapidly evolving tech industry, that’s still no excuse for doing nothing. As a result, in 2019, I believe the first government regulations of the tech industry will be put into place, specifically around data privacy and disclosure rules.

It’s clear from the backlash that companies like Facebook have been receiving that many consumers are very concerned with how much data has been collected not only about their online activities, but also about their location and many other very specific (and very private) aspects of their lives. Despite the companies’ claims that we handed over most all of this information willingly (thanks to confusingly worded and never-read license agreements), common sense tells us that the vast majority of us did not understand or know how the data was being analyzed and used. Legislators from both parties recognize these concerns and, despite the highly polarized political climate, are likely to agree on some kind of limitations on the type of data that’s collected, how it’s analyzed, and how it’s ultimately used.

Whether the US builds on Europe’s GDPR regulations, the privacy laws enacted in California last year, or something entirely different remains to be seen, but now that the value and potential impact of personal data have been made clear, there’s no doubt we will see laws that treat personal data as the valuable commodity it is.

Prediction 6: Personal Robotics Will Become an Important New Category
The idea of a “sociable” robot—one that people can have relatively natural interactions with—has been part of science fiction lore for decades. From Lost in Space to Star Wars to WALL-E and beyond, interactive robotic machines have been the stuff of our creative imagination for some time. In 2019, however, I believe we will start to see more practical implementations of personal robotics devices from a number of major tech vendors.

Amazon, for example, is widely rumored to be working on some type of personal assistant robot leveraging its Alexa voice-based digital assistant technology. Exactly what form the device might take and what sort of capabilities it might offer are unclear, but some type of mobile (as in, able to move, not small and lightweight!) visual smart display that also offers mechanical capabilities (lifting, carrying, sweeping, etc.) might make sense.

While a number of companies have tried and failed to bring personal robotics to the mainstream in the recent past, I believe a number of technologies and concepts are coming together to make the potential more viable this year. First, from a purely mechanical perspective, the scarily realistic machines now exhibited by companies like Boston Dynamics show how far movement, motion, and environmental awareness have advanced in the robotics world. In addition, the increasingly conversational and empathetic AI capabilities now being brought to voice-based digital assistants, such as Alexa and Google Assistant, demonstrate how our exchanges with machines are becoming more natural. Finally, the appeal of products like Sony’s updated Aibo robotic dog also highlights the willingness people are starting to show towards interacting with machines in new ways.

In addition, robotics-focused hardware and software development platforms, like Nvidia’s latest Jetson AGX Xavier board and Isaac software development kit, key advances in computer vision, as well as the growing ecosystem around the open source ROS (Robot Operating System) all underscore the growing body of work being done to enable both commercial and consumer applications of robots in 2019.

Prediction 7: Cloud-Based Services Will Make Operating Systems Irrelevant
People have been incorrectly predicting the death of operating systems and unique platforms for years (including me back in December of 2015), but this time it’s really (probably!) going to happen. All kidding aside, it’s becoming increasingly clear as we enter 2019 that cloud-based services are rendering the value of proprietary platforms much less relevant for our day-to-day use. Sure, the initial interface of a device and the means for getting access to applications and data are dependent on the unique vagaries of each tech vendor’s platform, but the real work (or real play) of what we do on our devices is becoming increasingly separated from the artificial world of operating system user interfaces.

In both the commercial and consumer realms, it’s now much easier to get access to what it is we want to do, regardless of the underlying platform. On the commercial side, the increasing power of desktop and application virtualization tools from the likes of Citrix and VMware, as well as moves like Microsoft’s delivery of Windows desktops from the cloud, all demonstrate how much simpler it is to run critical business applications on virtually any device. Plus, the growth of private (on-premises), hybrid, and public cloud environments is driving the creation of platform-independent applications that rely on nothing more than a browser to function. Toss in Microsoft’s decision to leverage the open-source Chromium browser rendering engine for the next version of its Edge browser, and it’s clear we’re rapidly moving to a world in which the cloud finally and truly is the platform.

On the consumer side, the rapid growth of platform-independent streaming services is also promoting the disappearance (or at least sublimation) of proprietary operating systems. From Netflix to Spotify to even the game streaming services mentioned in Prediction 2, successful cloud-based services are building most all of their capabilities and intelligence into the cloud and relying less and less on OS-specific apps. In fact, it will be very interesting to see how open and platform agnostic Apple makes its new video streaming service. If they make it too focused on Apple OS-based device users only, they risk having a very small impact (even with their large and well-heeled installed base), particularly given the strength of the competition.

Crossover work and consumer products like Office 365 are also shedding any meaningful ties to specific operating systems and instead are focused on delivering a consistent experience across different operating systems, screen sizes, and device types.

The concept of abstraction goes well beyond the OS level. New software being developed to leverage the wide range of AI-specific accelerators from vendors like Qualcomm, Intel, and Arm (AI cores in their case) is being written at a high enough level to allow it to work across a very heterogeneous computing environment. While this might have a modest impact on full performance potential, the flexibility and broad support that this approach enables are well worth it. In fact, it’s generally true that the more heterogeneous the computing environment grows, the less important operating systems and proprietary platforms become. In 2019, it’s going to be a very heterogeneous computing world, hence my belief that the time for this prediction has finally come.

Podcast: 2018 Year in Review

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the big news developments impacting the tech industry this year, including social media and data privacy concerns, price hits to the previously soaring FAANG stocks, developments in assisted and autonomous cars, challenges to AR and VR products, changes in the smartphone and PC businesses, the reinvigoration of the semiconductor market, and the impact of artificial intelligence.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Rejuvenated Intel Highlights Benefits of Competition

Competition really is a great thing, and if you ever really needed a reminder of how and why, look no further than the recently rejuvenated, albeit humbled, semiconductor behemoth based in Santa Clara, CA.

After an extremely difficult 2018 that featured both major ongoing delays in producing new 10 nm chips and the unceremonious exit of its CEO, Intel is also facing the toughest competitive environment it’s seen in some time. Not only is a resurgent AMD becoming a serious threat in both the PC and server markets, but Nvidia has managed to snag most of the focus in the attention-grabbing AI market, and new offerings from Qualcomm show that its computing capabilities are much stronger than many may have realized.

As a result of all these challenges, Intel has been forced to rethink a number of its previous investments, reorganize its increasingly scattered divisions, and put together a strategy that could both directly address the new competitive environment and leverage many of the unique capabilities that make Intel what it still is today (lest we forget): the largest semiconductor company in the world.

Thankfully, the company recently laid out its new vision through a series of announcements about new technology directions and strategy delivered at an industry analyst summit and tech press event. Specifically, on the technology side, the company discussed a new variation on the “chiplet” concept that leverages a new 3D chip-stacking technology codenamed Foveros. Instead of trying to continue along the traditional Moore’s Law path of increasing transistor density horizontally via large and complicated monolithic chips created on a single process technology node, Foveros technology represents an important pivot towards vertical density. Practically speaking, what this means is that the company can combine several different chips created at different process sizes, while still increasing overall transistor density, in a single chip package. It’s a fascinating development that highlights how Intel is still able to maintain its long history of manufacturing advances, despite the challenges it faced in bringing 10 nm chips to market.

The first real-world example of Foveros’ ability to integrate heterogeneous pieces together is the newly announced Sunny Cove-based design, expected to ship in 2019, which will combine a 10 nm “big” Core CPU with several “little” Atom CPU cores in a new hybrid x86 architecture (which, yes, sounds conceptually very similar to the “big.LITTLE” architecture designs that Arm and its customers have been talking about for years). The idea is to enable much more power-friendly x86 designs—it will be interesting to see what kinds of devices this new platform will enable.

At the strategic level, the company highlighted a new approach built on six pillars—Process, Architecture, Memory, Interconnect, Security, and Software—that manages to tie together a number of different resources that Intel owns into a nicely unified, and powerful, vision of the future of computing. The process advances are built not just on Foveros, but also on the simultaneous work it’s been doing on both 7 nm and 5 nm, both of which are expected to benefit from the hard-won lessons the company learned on 10 nm. Throw in the announcements about plans for new fabs, and it’s clear the company is focused on moving forward aggressively on the process front.

Architecturally, the company discussed the wide range of different architectures it’s creating, including CPU (scalar), GPU (vector), AI (matrix), and FPGA (spatial) compute offerings, as well as advancements in each of those areas. Over the years, Intel has amassed an impressive collection of different companies and architectures, but this was the first time it provided a unified vision that tied all the pieces together. The event also saw the release of the first few details on the company’s upcoming dedicated GPU effort, currently codenamed Xe and scheduled for release in 2020.

On the memory side, the company highlighted its advances in Optane storage and memory products. Intel emphasized new types of memory that break down the barriers between traditional DRAM and storage and enable the creation of more sophisticated and much faster overall computing system designs. These memory capabilities are a unique and often overlooked advantage Intel offers versus most all of its competition. Given the exploding amounts of data being created and processed, these memory technologies will be critically important for the increasingly large data sets that data center-based components are going to need to handle. (The fact that a simpler form of 3D stacking process technology is also used to build many Optane parts certainly doesn’t hurt either.)

The need to provide better and faster connections between various elements is another key capability in building more sophisticated chip and system designs in an increasingly heterogeneous computing world. True to form, Intel talked about a wide range of options it offers in this area as well. From 5G modems to silicon photonics to new ultra-high-speed serial connections between chiplet components in stacked 3D designs, Intel has a number of interconnect technologies that it can leverage in future components and devices.

Security, of course, is a key factor for any company today, and, though Intel has faced some big concerns around Spectre, Meltdown, and other related chip architecture flaws, the company recognizes the need to incorporate security capabilities into all of its offerings. In particular, Intel is investing in a multi-pronged security story that reaches across the chip level, SOC level, board level, and software level to ensure the safest possible devices.

Finally, one of the most audacious new goals for Intel is a new software strategy built around what it’s temporarily calling One API. The basic concept is to create a layer of software abstraction that would allow programmers to write at this higher level and then smartly take advantage of whatever hardware capabilities are available in a given system, from hybrid chip architectures to unique memory offerings and more. In theory, this includes the ability to send certain bits of code to one chip type and other chunks to other types, while still maintaining most of the raw performance that would be available if programmers wrote straight to the metal. It’s a goal that many people have talked about—and it still remains to be seen if Intel can execute on it—but it would certainly provide a key advantage to Intel in an increasingly heterogeneous computing world.
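
Since Intel has shared the concept but not the details, here’s a deliberately simplified TypeScript sketch (my own illustration, emphatically not Intel’s actual One API) of the general idea: the programmer describes the shape of the work, and a runtime layer decides which class of silicon should execute it.

```typescript
// Illustrative only: mapping workload shapes to the four accelerator
// classes Intel describes (scalar, vector, matrix, spatial), without the
// programmer ever naming specific hardware.

type Accelerator = "cpu" | "gpu" | "ai" | "fpga";

interface Workload {
  kind: "scalar" | "vector" | "matrix" | "spatial";
  run(target: Accelerator): void;
}

// scalar -> CPU, vector -> GPU, matrix -> AI accelerator, spatial -> FPGA.
const dispatchTable: Record<Workload["kind"], Accelerator> = {
  scalar: "cpu",
  vector: "gpu",
  matrix: "ai",
  spatial: "fpga",
};

function execute(work: Workload): void {
  // The runtime, not the programmer, picks the execution target.
  work.run(dispatchTable[work.kind]);
}

execute({
  kind: "matrix",
  run: (target) => console.log(`running matrix math on the ${target}`),
});
```

The real thing would obviously have to handle data movement, scheduling, and graceful fallbacks as well, which is precisely why execution remains the open question.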

In addition to these important technology and strategy announcements, it was clear that there was a new attitude within Intel’s executive ranks. Along with a humbler approach, the company openly talked about being a smaller player in a bigger market. Clearly, the goal was to emphasize that the company now sees itself as able to participate in a broader range of opportunities than it traditionally has. There was even a joking reference to bringing back former CEO Andy Grove’s insistence that Intel always be paranoid about the competition—a quality that, frankly, seemed to fade over the last few years.

At roughly 107,000 employees, Intel is a very large organization, and it can often be tough to turn big ships around. It’s clear, however, that there’s a fresh attitude and approach there that certainly makes the company appear much better prepared for an increasingly diverse computing future. Now, if they could only fill that CEO job….

Podcast: Qualcomm Tech Summit, Intel Analyst Event, Nvidia Robotics

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing the recent Qualcomm Snapdragon Tech Summit event and its impact on 5G, smartphones and computing, discussing the importance of Intel’s recent Analyst Summit and strategy announcements, and talking about news from Nvidia on a new robotics platform update.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Microsoft Browser Shift Has Major Implications for Software and Devices

Sometimes it’s the news behind the news that’s really important. Such is the case with the recent announcement from Microsoft that they plan to start using the open source Chromium project as a basis for future versions of their Edge browser. At a basic level, it’s an important (and surprising) move that seems significant for web developers and those who like to track web standards. For typical end users, though, it seems a bit ho-hum, as it basically involves under-the-hood changes that few people are likely to think much about or even notice.

However, the long-term implications of the move could lead to some profoundly important changes to the kinds of software we use, the types of devices we buy, the chips that power them, and much more.

The primary reason for this is that by adopting Chromium as the rendering engine for Edge, Microsoft should finally be able to unleash the full potential of the platform-independent, web-focused, HTML5-style software vision we were promised nearly a decade ago. If you’ll recall, initial assurances around HTML5 said that it was going to enable software that could run consistently within any compatible browser, regardless of the underlying operating system. For software developers, it would finally deliver on Java’s initial promise of “write once, run anywhere.” In other words, we could finally get to a world where everyone could get access to all the best software, regardless of the devices we use and own, and the ability to move our own data and services across these devices would become simple and seamless.

Alas, as with Java, the grandiose visions of what was meant to be, didn’t come to pass. Instead, HTML5-based applications struggled with performance and compatibility issues across platforms and devices. As a result, the potential nirvana of a seamless mesh of computing capabilities surrounding us never came to be, and we continue to struggle with getting everything we own to work together in a simple, straightforward way.

Of course, some might argue that they prefer the flexibility of choices and unique platform characteristics, despite the challenges of integrating across multiple platforms, application types, etc., and that’s certainly a legitimate point. However, even in the world of consistent software standards, there was never an intention to prevent choice or the ability to customize applications. For example, even though Chromium is also the web rendering engine for Google’s Chrome browser, Microsoft’s plan is to leverage some of the underlying standards and mechanisms in Chromium to create a better, more compatible version of Edge, but not build a clone of Chrome. That may sound subtle, but it’s actually an important point that will allow each of these companies (as well as others who leverage Chromium, such as Amazon) to continue to add their own secret sauce and provide special links to their own services and other offerings.

By moving the massive base of Windows users over to Chromium (as well as Edge browser users on the Mac, Android, and iOS, because Microsoft announced its intention to build Chromium-powered browsers for all those platforms as well), the company has single-handedly shifted the balance of web and browser-based standards towards Chromium. This means that application developers can now concentrate more of their efforts on this standard and ensure that a wider range of applications will be available—and work in a consistent fashion—across multiple devices and platforms.

There are some concerns that this shifts too much power into the hands of a single standard and, some are worried, to Google itself, since it started the Chromium project. However, Chromium is not the same as Chrome (despite the similar name). It’s an open source-based project that anyone can use and add to. With Microsoft’s new support, they’ve ensured that their army of developers, as well as others who have supported the Microsoft ecosystem, will now support Chromium. This, in turn, will dramatically increase the number of developers working on Chromium and, therefore, improve its quality and capabilities (in theory, at least).

The real-world software implications of this could be profound, especially because Microsoft has promised to embed Chromium support into Windows. Doing so will give web-based applications access to things like the file system, the ability to work offline, touch support, and other core system functions whose absence has previously prevented browser-based apps from truly competing against stand-alone apps. This concept, also known as progressive web apps (PWA), is seen as critical in redefining how apps are created, distributed, and used.
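
For a flavor of what this enables, here’s a minimal sketch of the offline piece of a PWA using the standard Service Worker API (written in TypeScript; the cached file names are illustrative): assets are pre-cached at install time and served from that cache when the network isn’t available.

```typescript
// A service worker that pre-caches an app shell and answers requests
// from the cache first, falling back to the network when online.
// File names are illustrative; the APIs are the standard Service Worker
// and Cache Storage interfaces.

const CACHE_NAME = "pwa-demo-v1";
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener("install", (event: ExtendableEvent) => {
  // Cache the app shell so the app can load with no connection at all.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(["/", "/index.html", "/app.js"])
    )
  );
});

sw.addEventListener("fetch", (event: FetchEvent) => {
  // Serve from the cache first; fall back to the network when online.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

Combined with the deeper system access Microsoft is promising, patterns like this are what allow a browser-delivered app to behave like an installed one.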

For consumers, this means the need to worry about OS-specific mobile apps or desktop applications could go away. Developers would have the freedom to write applications that have all the capabilities of a stand-alone app, yet can be run through a browser and, most importantly, can run across virtually any device. Software choices should go up dramatically, and the ability to have multiple applications and services work together—even across platforms and devices—should be significantly easier as well.

For enterprise software developers, this should open the floodgates of cloud-based applications even further. It should also help companies move away from dependencies on legacy applications and early Internet Explorer-based custom enterprise applications. From traditional enterprise software vendors like SAP, Oracle, and IBM through modern cloud-based players like Salesforce, Slack, and Workday, the ability to focus more of their efforts on a single target platform should open up a wealth of innovation and reduce difficult cross-platform testing efforts.

But it’s not just the software world that’s going to be impacted by this decision. Semiconductors and the types of devices that we may start to use could be affected as well. For example, Microsoft is leveraging this shift to Chromium as part of an effort to bring broader software compatibility to Arm-based CPUs, particularly the Windows on Snapdragon offerings from Qualcomm, like the brand-new Snapdragon 8cx. By bringing the underlying compatibility of Chromium to Windows-focused Arm64 processors, Microsoft is going to make it significantly easier for software developers to create applications that run on these devices. This would remove the last significant hurdle that has kept these devices from reaching mainstream buyers in the consumer and enterprise worlds, and it could turn them into serious contenders versus traditional x86-based CPUs from Intel and AMD.

On the device side, this move also opens up the possibility for a wider variety of form factors and for more ambient computing types of services. By essentially enabling a single, consistent target platform that could leverage the essential input characteristics of desktop devices (mice and keyboards), mobile devices (touch), and voice-based interfaces, Microsoft is laying the groundwork for a potentially fascinating computing future. Imagine, for example, a foldable multi-screen device that offers something like a traditional Android front screen, then unfolds to a larger Windows (or Android)-based device that can leverage the exact same applications and data, but with subtle UI enhancements optimized for each environment. Or, think about a variety of different connected smart screens that allow you to easily jump from device to device but still leverage the same applications. The possibilities are endless.

Strategically, the move is a fascinating one for Microsoft. On one hand, it suggests a closer tie to Google, much like the built-in support for Android-based phones in the latest version of Windows 10 did. However, the effort is specifically being done through open source, and Microsoft is likely to leverage its recent GitHub purchase to make web standards more open and less specifically tied to Google. At the same time, because Apple doesn’t currently support Chromium and is still focused on keeping its developers (and end users) more tightly tied into its proprietary OS, Microsoft is essentially further isolating Apple from key web software standards. In an olive branch move to Apple users, however, Microsoft has said that it will bring the Chromium-powered version of Edge to macOS and likely iOS, essentially giving Apple users access to this new world of software, but via a Microsoft connection.

In the end, a large number of pieces have to come together in order to make this web-based, platform-free version of the software world come to pass, and it wouldn’t be the least bit surprising to see roadblocks arise along the way. Still, Microsoft’s move to support Chromium could prove to be a watershed moment that quietly, but importantly, sets some key future technology trends into motion.

The Connected PC

Sometimes it takes real world frustrations before you can really appreciate the advances that technology can bring. Such is the case with mobile broadband-equipped notebook PCs.

Before diving into the details of why I’m saying this, I have to admit upfront that I’ve been a skeptic of cellular-capable notebooks for a very long time. As a long-time observer of, data collector for, and prognosticator on the PC market, I clearly recall several failed attempts at trying to integrate cellular modems into PCs over the last 15 years or so. From the early days of 3G, and even into the first few years of 4G-capable devices, PC makers have been trying to add cellular connectivity into their devices. However, attach rates in most parts of the world (Western Europe being the sole exception) have been extremely low—typically, in the low single digits.

The primary reasons for this limited success have been cost—both for the modem and cellular services—as well as the ease and ubiquity of WiFi and personal hotspot functions integrated into our smartphones. Together, these factors have put the value of cellular connectivity into question. It’s often hard to justify the additional costs for integrated mobile broadband, especially when the essentially “free” alternatives seem acceptable.

Despite all these concerns, however, we’ve seen a great deal of fresh attention being paid to cellular connected PCs of late. Specifically, the launch of the always connected PC (ACPC) effort by Microsoft, Qualcomm, and several major PC OEMs (HP, Asus, and Lenovo) this time last year brought new attention to the category and started to shift the discussion of PC performance towards connectivity, in addition to traditional CPU-driven metrics. Since that first launch with Snapdragon 835-based devices, we’ve already seen second generation Snapdragon 850-based PCs, such as Lenovo’s Yoga C630, start to ship.

We’ve also seen Intel bring its own modems into the PC market in a big way over the last few months, highlighting the increased connectivity options they enable. In the new HP Spectre Folio leather-wrapped PC, for example, Intel created a multi-chip module that integrates its Amber Lake Y-Series CPU, along with an XMM 7560 Gigabit LTE modem. Conceptually, it’s similar to the chiplet-style design that combined an Intel CPU and AMD Radeon GPU into a single multi-chip module that Dell used in its XPS 15 earlier this year, but integrates a discrete modem instead of the discrete GPU.

Together these efforts, as well as expected advancements, highlight the many key technological enhancements in semiconductor design that are being directed towards connectivity in PCs. Plus, with the launch of 5G-capable modems and 5G-enabled PCs on the very near horizon, it’s clear that we’ll be enjoying even more of these chip design-driven benefits in the future.

Even more importantly, changes in the wireless landscape and our interactions with it are bringing a new sense of pertinence and criticality to our wireless connections. While we have been highly dependent on wireless connections in our PCs for some time, the degree of dependence has now grown to the point where most people really do need (and expect) reliable, high-quality signals all the time.

This point hit home recently after I had boarded a plane but needed to finish a few critical emails before we took off. Unfortunately, the availability and quality of WiFi connections while people are getting seated is dicey at best. But by leveraging the integrated cellular modem in my Spectre Folio review unit, I was able to finish them with no problem. Similarly, on a long Lyft ride to an airport during another recent trip, I leveraged the modem in the Yoga C630 for similar purposes. Plus, in situations like conferences and other events where WiFi connections are often spotty, having a cellular alternative can be the difference between having a usable connection and not having one at all.

Admittedly, these are first-world problems and not everybody needs to have reliable connectivity in these types of limited situations. In other words, I don’t think the extra cost of integrated cellular modems makes sense for everyone. But, for people who are on the run a lot, the extra convenience can really make a difference. This is another example of the fact that many of the technological advances that we now see in the PC market are generally more incremental and meant to improve certain situations or use cases. Integrated cellular connections are in line with this kind of thinking as they provide an incremental boost in the ability to find a usable internet connection.

In addition to convenience, the increase of WiFi network-based security risks has raised concerns about using public WiFi networks in certain environments. While not perfect, cellular connections are generally understood to be more secure and much less vulnerable to any kind of network snooping than WiFi, providing more peace of mind for critical or sensitive information.

Of course, little of this would matter if network operators didn’t make pricing plans for cellular data usage on PCs attractive. Thankfully, there have been improvements here as well, but there’s still a long way to go to truly make this part of the connected PC experience friction-free. The expected 2019 launch of 5G-equipped notebooks will likely trigger a fresh round of pricing options for connected PC data plans, so it will be interesting to see what happens then.

Ultimately, while some of the primary concerns around the connected PC remain, it’s also becoming clear that many other factors are starting to paint the technology in a more favorable light. Always-on, always-reliable connections are no longer just a “nice to have,” but a “need to have” for many people, and along with the technology advancements, increased security and lower data plan costs are combining to create an environment where connected PCs finally start to make sense.

Podcast: Amazon AWS re:Invent, HP Inc. and Dell Earnings, Apple Music and Amazon

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing multiple announcements from Amazon’s AWS re:Invent conference, including the launch of several new custom chips, discussing the impact of HP’s and Dell’s earnings and what it means for the PC market, and chatting about the new agreement that will let Apple Music work on Amazon Echo devices.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Robots Ready to Move Mainstream

Are the robots coming, or are they already here? Fresh off the impressive, successful Mars landing of NASA’s InSight robotic lander, it seems appropriate to suggest that robots have already begun to make their presence felt across many aspects of our lives. Not only in space exploration and science, but, as we enter the holiday shopping season, their presence is being felt in industry and commerce as well.

From the factories building many of the products in demand this holiday season to the warehouses that store and ship them, robots have been making a significant impact behind the scenes for quite some time. Building on that success, both Nvidia and Amazon recently made announcements about robotics-related offerings intended to further advancements in industrial robots.

Just outside of Shanghai last week, at the company’s GTC China event, Nvidia announced that Chinese e-commerce giants JD.com and Meituan have both chosen to use the company’s Jetson AGX Xavier robotics platform for the development of next-generation autonomous delivery robots. Given the expected growth in online shopping in China, both e-commerce companies are looking to develop a line of small autonomous machines that can be used to deliver goods directly to consumers, and they intend to use Xavier and its associated JetPack SDK to do so.

At the company’s AWS re:Invent event in Las Vegas this week, Amazon launched a cloud-based robotics test and development platform called AWS RoboMaker that it’s making available through its Amazon Web Services cloud computing offering. Designed for everyone from robotics students who compete in FIRST competitions to robotics professionals working at large corporations, RoboMaker is an open-source tool that leverages and extends the popular Robot Operating System (ROS).

Like some of Nvidia’s software offerings, RoboMaker is designed to ease the process of programming robots to perform sophisticated actions that leverage computer vision, speech recognition, and other AI-driven technologies. In the case of RoboMaker, those services are provided via a connection to Amazon’s cloud computing services. RoboMaker also offers the ability to manage large fleets of robots working together in industrial environments or places like large warehouses (hmm…wonder why?!)

The signs of growing robotic influence have been evident for a while in the consumer market as well. The success of Roomba robotic vacuums, for example, is widely heralded as the first step in a home robotics revolution. Plus, with the improvements that have occurred in critical technologies such as voice recognition, computer vision, AI, and sensors, we’re clearly on the cusp of what are likely to be some major consumer-focused robotics introductions in 2019. Indeed, Amazon is heavily rumored to be working on some type of home robot project—likely leveraging their Alexa work—that’s expected to be introduced sometime next year.

Robotics is also a key part of the recent renaissance in STEM education programs, as it allows kids of many ages to see the fun, tangible results of their science, math, and engineering-related skills brought to life. From high school-level FIRST robotics competitions down to early grade school programs, future robotics engineers are being trained via these types of activities every day in schools around the world.

The influence of these robotics programs and the related maker movement developments have reached into the mainstream as well. I was pleasantly surprised to see a Raspberry Pi development board and other robotics-related educational toys in stock and on sale at, of all places, my local Target over the Black Friday shopping weekend.

The impact of robots certainly isn’t new in either the consumer or business world. However, except for a few instances, real-world interactions with them have still been limited for most people. Clearly, that’s about to change, and people (and companies) are going to have to be ready to adapt. Like the AI technologies that underlie a lot of the most recent robotics developments, there are some great opportunities, but also some serious concerns, particularly around job replacement, that more advanced robotics will bring with them. The challenge moving forward will be determining how to best use robots and robotic technology in ways that can improve the human experience.