Will Apple IBM Deal Let Watson Replace Siri For Business Apps?

Even though it wasn’t the first time that Apple and IBM have announced partnerships in the enterprise space, as a long-time tech industry observer, there’s still part of me that finds it surprising to see an Apple executive speak at an IBM event.

Such was the case at last week’s IBM Think conference in Las Vegas, where the two announced that IBM’s Watson Services would be offered as an extension to Apple’s CoreML machine learning software. Essentially, for enterprises creating custom mobile applications for iPhones (and iPads), the new development means they can get access to IBM’s Watson AI tools in their iOS business applications.

On the surface, it’s easy to say that this is just an extension of some of the work IBM and Apple announced several years back to bring some of IBM’s industry-specific vertical applications to the iPad. In some ways, it is.

But in many other ways, this announcement is arguably more important and will generate more long-term impact than whatever new products, software, and services Apple announces later today at their education event in Chicago.

The reasons are several. First, the likely focus of much of this work will be on the iPhone, which has a larger and more important presence in businesses around the world than the iPad. Depending on whom you ask, iPhones hold nearly a 50% share of smartphones used in US businesses, for example, several points higher than their share of the overall US smartphone market.

More important than that, however, is the new dynamic between Apple’s machine learning software and the capabilities offered by IBM. At a basic level, you could argue that there may be future battles between Siri and Watson. Given all the difficulties Apple has had with Siri, versus the generally much more positive reaction to Watson, that could prove to be a significant challenge for Apple.

The details of the agreement specify that Watson Services for CoreML will allow applications created for iOS to leverage pre-trained machine learning models/algorithms created with IBM’s tools as a new option. As part of CoreML, Apple already offers machine learning models and capabilities of its own, as well as tools to convert models from popular neural network/machine learning frameworks, such as Caffe and TensorFlow, into CoreML format. The connection with IBM brings a higher level of integration with external machine learning tools than Apple has offered in the past.
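To make that existing conversion path a bit more concrete, here is a minimal sketch using Apple’s coremltools package; the model file name and metadata are hypothetical placeholders, and the exact converter call varies by coremltools version (recent releases expose a unified convert() entry point, while older ones use per-framework converters).

```python
# Minimal sketch: converting an already-trained Keras/TensorFlow model into
# Core ML format with Apple's coremltools (exact API varies by version).
import coremltools as ct
from tensorflow import keras

# Hypothetical, already-trained image classifier saved from TensorFlow/Keras.
keras_model = keras.models.load_model("fruit_ripeness.h5")

# Recent coremltools releases expose a unified convert() entry point.
mlmodel = ct.convert(keras_model)

# Add human-readable metadata and save the .mlmodel for use in an Xcode project.
mlmodel.short_description = "Classifies produce images by ripeness (illustrative)"
mlmodel.save("FruitRipeness.mlmodel")
```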

Initially, the effort is being focused on the visual recognition tools that IBM has made available through its Watson services. Specifically, developers will be able to use Watson Visual Recognition to add computer vision-style capabilities to their existing apps. So, for example, you could point your iPhone’s camera at an object and have the application recognize it and provide exact details about specific characteristics, such as determining a part number, recognizing whether a piece of fruit is ripe, etc. What’s interesting about this is that Apple already has a Vision framework for letting you do similar types of things, but this new agreement essentially lets you swap in the IBM version to leverage their capabilities instead.
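For a flavor of what the Watson Visual Recognition service looks like from a developer’s point of view (separate from the Core ML integration itself), here is a rough sketch using IBM’s Python SDK of that era; the package and parameter names (watson_developer_cloud, iam_apikey), the response handling, and the credentials and image file are all assumptions or placeholders that differ across SDK versions.

```python
# Rough sketch of classifying an image with Watson Visual Recognition via IBM's
# Python SDK; parameter names and response handling vary by SDK version.
import json
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    iam_apikey="YOUR_API_KEY",  # placeholder credential
)

# Hypothetical photo of a machine part taken with the phone's camera.
with open("part_photo.jpg", "rb") as images_file:
    result = visual_recognition.classify(images_file=images_file, threshold="0.6")

# Newer SDK versions wrap responses, so you may need result.get_result() instead.
print(json.dumps(result, indent=2))
```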

IBM also has voice-based recognition tools as part of Watson Services that could theoretically substitute for Apple’s Foundation Natural Language Processing tools that sit at the heart of Siri. That’s how we could end up with some situations of Siri vs. Watson in future commercial business apps. (To be clear, these efforts are only for custom business applications and are not at all a general replacement for Apple’s own services, which will continue to focus on Siri for voice-driven interactions in consumer applications.) The current announcement specifically avoids mentioning voice-based applications, but knowing that ongoing machine learning efforts between Apple and IBM are expected to grow, it’s not too hard to speculate.

If you’re wondering why Apple would agree to create this potential internal software rivalry, the answer is simple: legacy. Despite earlier efforts between the two companies to drive the creation and adoption of custom iOS business applications, the process has moved along slowly, in large part because so much of the software that enterprises already have is in older “legacy” formats that are difficult to port to new environments. By working with IBM more closely, Apple is counting on making the process of moving from these older applications or data sets to newer AI-style machine learning apps significantly easier.

Another interesting aspect about the new Apple IBM announcement is the IBM Cloud Developer Console for Apple, which is a simple, web-based interface that lets Apple developers start experimenting with the Watson services and other cloud-based services offered by IBM. Using these tools, for example, lets you build and train your own models in Watson, and even create an ongoing training loop that lets the on-phone models get smarter over time. In fact, what’s unique about the arrangement is that it lets companies bridge between Apple’s privacy-focused policies of doing on-device inferencing—meaning any incoming data is processed on the phone without sending data to the cloud—and IBM’s focus on enterprise data security in the cloud.

Another potentially interesting side note is that, because IBM just announced a deal with Nvidia to extend the amount of GPU-driven AI training and inferencing that IBM is doing in their cloud, we could see future iOS business apps benefitting directly from Nvidia chips, as those apps connect to IBM’s Nvidia GPU-equipped cloud servers.

More than anything, what the news highlights is that in the evolution of more sophisticated tools for enterprise applications, it’s going to take many different partners to create successful mobile business applications. Gone are the days of individual companies being able to do everything on their own. Even companies as large as Apple and IBM need to leverage various skill sets, work through the legacy of existing business applications, and provide access to programming and other support services from multiple partners in order to really succeed in business—even if it does make for some friendly competition.

Edge Servers Will Redefine the Cloud

Talk to most people about servers and their eyes start to glaze over. After all, if you’re not an IT professional, it’s not exactly a great dinner party conversation.

The truth is, in the era of cloud-driven applications in which we now live, servers play an incredibly vital role, functioning as the invisible computing backbone for the services upon which we’ve become so dependent.

Most servers live either in large cloud-hosting sites or within the walls of corporate data centers. The vast majority of them are Intel x86-based computing devices that are built similarly to and essentially function like large, powerful PCs. But that’s about to change.

Given the tremendous growth in the burgeoning world of edge computing—where computing resources are being pushed out towards the edge of the network and closer to us and our devices—we’re on the cusp of some dramatic changes in the world of servers. The variations are likely to come in the size, shape, capabilities, number, and computing architecture of a whole new category of devices that some have started to call gateways or, in more powerful forms, edge servers.

The basic idea driving edge computing is that current centralized cloud computing architectures are simply not efficient enough for, nor capable of, meeting the demands that we will soon be placing on them. Thanks to new types of applications—everything from voice-based personal assistants that use the cloud for translation, to increasingly connected cars that use the cloud for mapping and other autonomous features—as well as the continued growth of existing applications, such as streaming media, there’s an increasing recognition that new types of computing infrastructure are necessary. Distributing more computing intelligence out to the edge can reduce latencies and other delays, improve network efficiencies, reduce costs, enhance privacy, and improve overall capacity and performance for intelligent services and the connected devices which rely on them.

Because this intelligence is going to be needed in so many places, for so many devices, the opportunity for edge servers will be tremendous. In some instances, these edge servers may end up being downsized versions of existing servers, with similar architectures, similar applications, and similar types of nearby connected infrastructure components, such as storage and networking.

In many more cases, however, edge computing applications are likely going to demand a different type of server—at many levels. One likely scenario is best exemplified by hyperconverged server appliances, which essentially provide the equivalent of a complete data center in a single box, offering intelligent software-controlled storage and networking components, in addition to the critical compute pieces. The beauty of hyperconverged devices is that they require significantly less space and power than traditional servers, but their software-based architectures make them just as flexible as large data centers. That flexibility will be critical for edge servers, which will need to be reconfigured on the fly to meet rapidly shifting application demands.

Another likely scenario is a shift towards other types of computing architectures. While Intel-based x86 dominates the very conservative traditional server market, the fresh approach that edge-based servers and applications are likely to take removes the onus of legacy support. This will free companies to choose the types of architectures best suited to these new applications. A clear potential winner here is Arm, whose power-efficient designs could find a whole new set of opportunities in cracking the server market for edge-based devices. A number of vendors, including HPE, Cavium, and others, are just starting to deploy Arm-based servers, and edge computing applications will likely be a strong new market for these products.

Even within x86, we’ll likely see variations. AMD’s well-received Epyc line of server chips, for example, will likely find greater acceptance in edge server applications. In addition, because many edge computing applications are going to be connected with IoT (Internet of Things) devices, new types of data and new types of analytics applications are going to become increasingly important. A lot of these new applications will also be strong users of machine learning and artificial intelligence. Nvidia has already built a strong business in providing GPUs to traditional servers for these kinds of AI and machine learning applications, and they’ll likely see even more use in edge servers.

On top of GPUs, we’ll likely see the introduction of other new architectures in these new edge servers. Because they’re different types of servers, running new types of applications, they’re the perfect place for vendors to integrate other types of chip architectures, such as the AI-specific chips that Intel’s Nervana group is working on, as well as a host of others.

Software integration is also going to be critical for these new edge servers, as some companies will opt to transition existing cloud-based applications to these new edge servers, some will build tools that serve as feeders into cloud-based applications, and some will build new applications entirely, taking advantage of the new chip architectures that many of these new servers will contain. This is where companies like IBM have an opportunity to leverage much of their existing cloud and IoT work into products and services for companies who want to optimize their applications for the edge.

Though most of us may never physically see it, or even notice it, we are entering a phase of major disruption for servers. The degree of impact that edge-focused servers will ultimately have is hard to predict, but the fact that there will be an impact is now a foregone conclusion.

Podcast: Digital Assistants, AMD Chip Flaws, Apple Education Event, Fitbit

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing recent developments around digital assistants such as Apple’s Siri, discussing AMD chip flaws, chatting about the upcoming Apple Education event, and talking about Fitbit’s new smartwatch.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Is it Too Late for Data Privacy?

The numbers are staggering. Last year’s Equifax breach, along with more recent additions, has resulted in nearly 150 million Americans—more than half of all those 18 and older—having essential identity data exposed, such as Social Security numbers, addresses, and more. And that’s just in the past year. In 2016, 2.2 billion data records of various types were poached via Internet of Things (IoT) devices—such as smart home products. Just yesterday, a judge ruled that a class action case against Yahoo (now part of Verizon) regarding the data breach of all 3 billion (yes, with a “B”) of its Yahoo mail accounts could proceed. Is it any wonder that, according to a survey by the National Cyber Security Alliance, 68% of Americans don’t trust brands to handle their personal information appropriately?

The situation has become so bad, in fact, that some are now questioning whether the concept of personal privacy has essentially disappeared into the digital ether. Talk to many young people (Gen Z, Millennials, etc.) and they seem to have already accepted that virtually everything about their lives is going to be public. Of course, many of them don’t exactly help their situation, as they readily share staggering amounts of intimate details about their lives on social media and other types of applications, but that’s a topic for another day.

Even people who try to be cautious about their online presence are starting to realize that there’s a staggering amount of information available about virtually every one of us for anyone who bothers to look. Home address histories, phone numbers, employment histories, group affiliations, personal photos, pets’ names, web browsing history, bank account numbers, and yes, Social Security numbers are all within relatively easy (and often free) reach for an enormous percentage of the US population.

Remember all those privacy tips about shredding your mail or other paper documents to avoid getting your identity stolen? They all seem kind of quaint (and, unfortunately, essentially useless) now, because our digital footprints extend so much farther and deeper than any paper trail could possibly go that I doubt anyone would even bother trying to leverage paper records anymore.

While it may not be popular to say so, part of the problem has to do with the enormous amounts of time that people spend on social media (and with social media platforms themselves). In fact, according to a survey of cyberstalkers reported by the Identity Theft Resource Center, 82% of them use social media to gather the critical personal information they need to commit identity theft against potential victims.

My perspective on the extent of the problem with social media really hit home a few weeks ago as I was watching, of all things, a travel program on TV. Like many of these shows, the host was discussing interesting places to visit in various cities—in this case, one of them was a museum in Nuremberg, Germany dedicated to the Stasi, the infamous (and now defunct) secret police of former East Germany. A guide from the museum was describing the tactics this nefarious group would use to collect information on its citizens: asking friends and family to share the activities of one another, interceding between people writing to each other, secretly reading letters and other correspondence before they got passed along, and so on.

The analogies to modern social media, as well as to the website and email tracking used to generate “personalized” ads, were staggering. Of course, the difference is that now we’re all doing this willingly. Plus, today it’s all in easily savable, searchable, and archivable digital form, instead of the paper files they used to organize into physical folders on everyone. Frankly, the information that many of our modern digital services are creating is something that these secret police-type organizations could have only dreamt about—it’s an Orwellian tragedy of epic proportions.

So, what can we do about it? Well, for one, we all need to pull our collective heads out of the sand and acknowledge that it’s a severe problem. But beyond that, it’s clear that something needs to be done from a legislative or regulatory perspective. I’m certainly not a fan of governmental intervention, but for an issue as pervasive and unlikely to change as this one, there seems little choice. (Remember that companies like Facebook, Google and others are making hundreds of billions of dollars every year leveraging some of this data for advertising and other services, giving them absolutely zero incentive to adjust on their own.)

One interesting idea to start with is the concept of data labelling, a la the food labelling standards now in place. With data labelling, any online service, website, application or other data usage would be required to explain exactly what information they were collecting, what it was used for, who it was sold to, etc., all in plain, simple language in a very obvious location. Of course, there should also be options that disallow the information from being shared. In addition, an interesting twist might be the potential to leverage blockchain technology to let each person control and track where their information went and potentially even financially benefit from its sale.
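To make the idea a bit more tangible, here is a purely hypothetical sketch of what a machine-readable “data label” might contain; none of these field names or values come from any existing standard.

```python
# Hypothetical "data label" for an online service, modeled loosely on food
# labels; the fields and values here are illustrative only.
data_label = {
    "service": "ExampleSocialApp",
    "data_collected": ["email address", "location history", "contacts"],
    "used_for": ["personalized ads", "product analytics"],
    "sold_or_shared_with": ["ad networks", "data brokers"],
    "retention_period": "24 months",
    "sharing_opt_out_available": True,
}

# A regulator, browser, or app store could render this in plain language.
for field, value in data_label.items():
    print(f"{field.replace('_', ' ')}: {value}")
```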

The problem extends beyond the more obvious types of information to location data as well. In fact, even if all the content of your online activity were blocked, it turns out that a tremendous amount of information can be gathered just by tracking your location on a regular, ongoing basis, as the January story about US military personnel being tracked through their Strava and Fitbit fitness apps and wearables so glaringly illustrated. Even outside military situations, the level of location tracking that can be done through a combination of smartphones, GPS, connected cars, ride sharing applications, WiFi networks, Bluetooth, and more is staggering, and there’s currently no legislation in place to prevent that data from being used without your permission.

All of us can and should be smarter about how we spend our time online, and there are organizations like Staysafeonline.org that offer lots of practical tips on things you can do. However, the issues go way beyond simple tricks to help protect your digital identity. It’s time for Congress and other representatives to take a serious look at things they can do to protect our privacy and identity from the digital world in which we live. Even legislative efforts won’t solve all the data privacy issues we face, but the topic is just too important to ignore.

Podcast: Qualcomm-Broadcom, Gaming Industry Meets Trump, Machine Learning Software

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell analyzing the latest developments in the proposed Qualcomm-Broadcom merger, discussing this week’s meeting between gaming industry executives and President Trump, and chatting about the latest machine learning software developments from Microsoft, Intel, Arm, Qualcomm and others.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Hidden Technology Behind Modern Smartphones

Sometimes it’s not just the little things, but the invisible things that matter.

This is even true in the world of technology, where the focus on the physical characteristics of devices such as smartphones, or the visual aspects of the applications and services that run on them, is so dominant.

At the Mobile World Congress (MWC) trade show last week, the importance of this observation became apparent on many different levels. From technologies being introduced to create the invisible wireless networks that our smartphones are so dependent on, to the semiconductor chip innovations hidden inside our smartphones, to the testing required to make all of this work, these “invisible” developments were some of the biggest news to come out of the show.

On the network side, discussions focused entirely on the equipment necessary to create next-generation 5G networks and real-world timelines for delivering them. Historically, MWC was a telecom equipment show, and its heritage shone through strongly this year. Traditional network infrastructure companies such as Ericsson, Nokia, and Huawei were joined by relative newcomers to this particular area, including Intel and Samsung, to talk about how they’re planning to deliver 5G-capable equipment to telco operators such as AT&T, T-Mobile, and Verizon later this year.

The details behind 5G network equipment technologies, such as millimeter wave, network slicing, and others, become extraordinarily complex very quickly. The bottom line, however, is that they’re going to enable 5G networks to support not only significantly faster upload and download speeds for 5G-enabled smartphones, but also much more consistent speeds. This translates into smoother experiences for applications such as high-resolution video streaming, as well as new kinds of applications that haven’t been possible before, such as self-driving cars.

To make this work, new types of 5G-capable modems are needed inside next generation smartphones, automobiles and other devices, and that’s where chip companies like Qualcomm and Intel come in.

One of the great things about the forthcoming transition to 5G is that existing 4G LTE modems and the devices they currently are used in (specifically, all our current smartphones) will be able to connect to and work with these new 5G networks.

New telecom industry standards were specifically designed to initially layer new 5G enhancements on top of existing 4G ones, thereby guaranteeing compatibility with existing devices. In fact, there are even some situations where our current smartphones will get a bit faster as 5G networks and phones become available, because the new phones will essentially move their traffic onto new lanes (radio bands), reducing congestion for existing 4G devices. Eventually, we will move beyond this first “non-standalone” (NSA) generation of networks to standalone (SA) 5G-only networks, but for the next several years, the two network “generations” will work together.

At last year’s MWC, Qualcomm introduced a prototype of the world’s first 5G-capable modem (the X50). With the recent finalization of the 5G radio standard (called 5G NR) last December, this year they discussed the first successful 5G tests using the X50 to connect to network equipment from providers like Ericsson and Nokia. More importantly, they announced that the X50 would be shipping later this year and that the first 5G-capable smartphones will be available to purchase in early 2019.

Intel and Huawei also joined the 5G modem fray this year. Intel discussed their own successful trials with their prototype 5G modems and said that they would provide both a 5G modem for PCs and, thanks to work with chip company Spreadtrum, a 5G modem and applications processor for smartphones by the end of 2019. Huawei’s new modem is much larger and won’t be in smartphones initially, but instead will be used for applications such as 5G-based fixed wireless broadband services for home or business internet connections.

Another “hidden” technology is the testing that’s necessary to make all these devices and networks work together. Companies like National Instruments (NI) have worked silently in the background for the last several years creating test equipment and test beds that allow chipmakers, device makers, network equipment makers and telecom carriers to ensure that the new standards at the heart of 5G actually work in simulations of real-world environments. At MWC last week, NI showed a new 5G NR radio emulator, a new millimeter wave test bed in conjunction with Samsung network equipment, and an analog RF front end for 5G modems done in conjunction with Qorvo.

As we bury ourselves in our daily smartphone usage, it’s easy to forget how much technology is working behind the scenes to ensure that they deliver all the capabilities to which we’ve become accustomed. While there’s little need for most people to think about how it all works, it’s still important to know that the kinds of “invisible” advancements that were presented at this year’s MWC offer a strong path for smartphones’ continued and growing use.

Podcast: MWC 2018, Samsung Galaxy S9, Qualcomm, Intel, Huawei 5G Modems

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing important developments from the Mobile World Congress trade show, analyzing the impact of Samsung’s new Galaxy S9 smartphones, and describing the 5G modem introductions from Qualcomm, Intel and Huawei and what they mean for the rollout of 5G.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Surprising Highlight of MWC: Audio

It’s certainly not what I expected. After all, when it comes to a show that’s traditionally focused on the telecom industry, audio typically just refers to voice.

But at this year’s Mobile World Congress (MWC) trade show in beautiful Barcelona, Spain, several of the biggest product announcements actually share a similar characteristic—a focus on sound and audio. In this case, it’s the surround sound technology from Dolby called Atmos. Samsung’s new flagship S9/S9+ phones, the Huawei Matebook X Pro notebook, and the new Lenovo Yoga 730 notebook, which was also announced here at MWC, all feature it. The convertible 2-in-1 Yoga 730 also extends its audio features through the integration of both the Alexa and Cortana digital assistants, as well as array microphones that allow you to use the device from a distance, similar to standalone smart speakers.

To be clear, all of these devices offer a variety of other important new technologies that go well beyond audio, but these sound-focused capabilities are interesting for several reasons. First, Dolby’s Atmos is a really cool, impressive technology that people will notice as a unique feature of these products. In addition, the inclusion of Atmos is exactly the kind of subtler improvement that is becoming the primary differentiator for new generations of products in mature categories such as smartphones and PCs.

Walking through the halls of the convention center, you could easily find collections of very nice-looking client devices amidst the telecom network equipment, IoT solutions, autonomous car technologies, and other elements that are a big part of MWC. What was impossible not to notice, however, is that they pretty much all looked the same, particularly smartphones. They’ve morphed their physical designs into little more than flat slabs of glass. Even many of today’s superslim notebook PCs have fairly similar designs. In both cases, the form factors of the devices are quite good, so this isn’t necessarily a bad development, but they are getting harder and harder to tell apart quickly from a visual perspective.

As a result, companies are having to incorporate interesting new technologies within their devices to offer some level of differentiation from their competition. That’s why the Dolby Atmos integration is an interesting reflection on the state of modern product development.

At its heart, Atmos is the next evolution of Dolby’s 30+ year history of creating surround sound formats and experiences. Originally developed for movie theaters and now more commonly found on home audio products like soundbars and AV receivers from big consumer electronics vendors like Sony, Atmos offers two important additions and enhancements to previous versions of Dolby’s surround sound technologies. First, from an audio perspective, Atmos highlights the ability to position sounds vertically and horizontally, delivering an impressive 360˚ field of sound that really makes you feel like you’re in the center of whatever video (or gaming) content you happen to be watching. Previous iterations of Dolby (and rival DTS) technology have had some of these capabilities, but Atmos takes it to a new level in terms of performance and impact. The technology primarily achieves this through the use of head-related transfer functions (HRTFs), which emulate how sounds enter our ears at slightly different times and are influenced by the environment around us.

The second big change for Atmos involves the audio file format. Unlike previous surround sound technologies, where the position of sounds was fixed, in Atmos sounds are described as objects and will sound slightly different depending on what types of speakers and audio system they’re being played through. Essentially, it optimizes the surround sound experience for specific devices and the components they have.
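Conceptually, an audio “object” pairs a sound with positional metadata that the playback device then renders for whatever speakers it actually has. The sketch below is purely illustrative of that idea and is not Dolby’s actual Atmos format.

```python
# Illustrative-only sketch of object-based audio: a sound plus 3D position
# metadata, rendered differently depending on the playback hardware.
helicopter = {
    "object_id": 7,
    "audio_clip": "helicopter_loop.wav",          # the sound itself, stored once
    "position": {"x": 0.3, "y": 0.8, "z": 0.9},   # intended location in 3D space
    "gain_db": -6.0,
}

def render(obj, speaker_layout):
    # A device-specific renderer maps the object's position onto the speakers
    # (or headphones) actually available, rather than using fixed channels.
    print(f"Rendering object {obj['object_id']} for: {speaker_layout}")

render(helicopter, "notebook with four built-in speakers")
render(helicopter, "stereo headphones")
```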

The effectiveness of this approach is very clear on Huawei’s Matebook X Pro, where the Dolby Atmos implementation leverages the four speakers built into the notebook. One critique of surround sound technologies is that their effectiveness can be dramatically impacted by where you are sitting. Basically, you really need to be in the sound “sweet spot” to get the full impact of the effect. What’s interesting about the implementation of Atmos in a notebook is that it’s almost impossible not to be in the sweet spot if you’re viewing content directly on the screen in front of you. As a result, the audio experience with Dolby Atmos-enabled content on the Matebook X Pro is extremely impressive—it’s actually a second-generation implementation for Huawei and Dolby, and it’s quite effective.

For mobile devices like the new Samsung S9/S9+, Dolby Atmos can be delivered either through stereo speakers (a feature that many smartphones still don’t have) or, even more effectively, through the headphone jack. In fact, the implementation of Atmos on the S9 is probably the most effective argument in favor of having/needing a headphone jack on a smartphone that I’ve seen. With headphones, you get a truly immersive surround sound experience with Atmos-enabled content on the S9/S9+, and through the speakers on either end of the S9/S9+ you also get an audio experience that’s much better than with most other smartphones.

In the case of the Lenovo Yoga 730, the Atmos implementation is only via the headphone jack, but once connected to a standard set of headphones, it gives you the same kind of virtual surround experience of the other devices.

Admittedly, not everyone cares about high-quality audio as much as I do, but given how much video content we consume on our devices, either through streaming services like Netflix or just as part of our social media or news feeds, I believe it can be an important differentiator for vendors who deploy it. Plus, it’s important to set our expectations for the kinds of advancements that the next few generations of our devices are likely going to have. They may not be as dramatic as folding screens, but technologies like Dolby Atmos can certainly improve the overall experience of using our devices.

The Blurring Lines of 5G

On the eve of the world’s largest trade show dedicated to all things telecom—Mobile World Congress (MWC), which will be held next week in beautiful Barcelona, Spain—everyone is extraordinarily focused on the next big industry transition: the move to 5G.

The interest and excitement about this critical new network standard is palpable. After years of hypothetical discussions, we’re finally starting to see practical test results (helped along by companies like National Instruments) being discussed, and realistic timelines being revealed by major chip suppliers like Qualcomm and Intel, phone makers like Samsung, network equipment providers like Ericsson, and major carriers such as AT&T and Verizon. To be clear, we won’t see smartphones with 5G modems that we can actually purchase, or the mobile networks necessary to support them, until around next year’s show—and even those will be more bleeding-edge examples—but we’ve clearly moved past the “I’m pretty sure we’re going to make it” stage to the “let’s start making plans” stage. That’s a big step for everyone involved.

As with the transition from 2G to 3G and 3G to 4G, there’s no question that the move to 5G is also a big moment. These industry transitions only occur about once a decade, so they are important demarcations, particularly in an industry that moves as fast as the tech industry does.

The transition to 5G will not only bring faster network connection speeds—as most everyone expects—but also more reliable connections in a wider variety of places, particularly in dense urban environments. As connectivity has grown to be so crucial for so many devices, the need for consistent connections is arguably even more important than faster speeds, and that consistency is one of the key promises that we’re expecting to see from 5G.

In addition, the range of devices that are expected to be connected to 5G networks is also growing wider. Automobiles, in particular, are going to be a critical part of 5G networks in the next decade, especially as more assisted, semi-autonomous and (eventually) fully autonomous cars start relying on critical connections between cars and with other road-related infrastructure to help them function more safely.

As exciting as these developments promise to be, however, it’s also becoming increasingly clear that the switchover from 4G to 5G will be far from a clean, distinct break. In fact, 5G networks will still be very dependent on 4G network infrastructure—not only in the early days when 5G coverage will be limited and 4G will be an essential fallback option—but even well into the middle of the 2020s and likely beyond.

A lot of tremendous work has been done to build a robust 4G LTE network around the world and the 5G network designers have wisely chosen to leverage this existing work as they transition to next generation standards. In fact, ironically, just before the big 5G blowout that most are expecting at this year’s MWC trade show, we’re seeing some big announcements around 4G.

The latest modem from Qualcomm, the X24, for example, isn’t a 5G modem (though their previously announced X50 modem is expected to be the first commercial modem to comply with the recently ratified 5G NR “New Radio” standard), but rather a further refinement of 4G. Offering theoretical download speeds of up to 2 Gbps thanks to 7x carrier aggregation—a technology that allows multiple chunks of radio bandwidth to function as a single data “pipe”—the X24 may, in fact, offer even faster connection speeds than early 5G networks will enable. In theory, first-generation 5G networks should go up to 4 Gbps and even higher, but thanks to the complexities of network infrastructure and other practical realities, real-world numbers are likely to be well below that in early incarnations.
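As a rough illustration of how bonding many LTE data streams can approach that kind of headline number, here is some back-of-the-envelope arithmetic; the stream count and per-stream rate are simplifying assumptions for illustration, not Qualcomm’s official figures.

```python
# Back-of-the-envelope only: how aggregating many LTE streams can approach 2 Gbps.
# The numbers below are simplifying assumptions, not official specifications.
spatial_streams = 20          # e.g., 4x4 MIMO across five aggregated 20 MHz carriers
peak_per_stream_mbps = 100    # rough LTE peak for one 20 MHz stream with 256-QAM
total_gbps = spatial_streams * peak_per_stream_mbps / 1000
print(f"Approximate aggregate peak: {total_gbps:.1f} Gbps")  # ~2.0 Gbps
```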

Of course, this is nothing new. In other major network transitions, we saw relatively similar phenomena, where the last refinements to the old network standards were actually a bit better than the first iterations of the new ones.

In addition, a great deal of additional device connectivity will likely be on networks other than official 5G for some time. Standards like NB-IoT and Cat M1 for Internet of Things (IoT) applications actually ride on 4G LTE networks, and there’s little need (nor any serious standards work being done yet) to bring these over to 5G. Even in automotive, though momentum is rapidly changing, the “official” standard for vehicle-to-vehicle (V2V) connections in the US is still DSRC, and the first cars with it embedded just came out this year. DSRC is a nearly 20-year-old technology, however, and was designed well before the idea of autonomous cars became a reality. As a result, it isn’t likely to last as the standard much longer, given the dramatically increased network connectivity demands that even semi-autonomous automobiles will create. Still, it highlights yet another example of the challenges of evolving to a truly 5G world.

There is no question that 5G is coming and that it will be impactful. However, it’s important to remember that the lines separating current and next generation telecom network standards are a lot blurrier than they may first appear.

Podcast: WiFi Mesh, Qualcomm X24 Modem, Arm Trillium AI Chips, AMD Zen Desktop APUs

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing WiFi mesh home routers, Qualcomm’s new 2 Gbps X24 LTE Modem, Arm’s Trillium AI chips, and AMD’s new Zen desktop APUs.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Modern State of WiFi

So easy to take for granted, yet impossible to ignore. I’m speaking, of course, of WiFi, the modern lifeblood of virtually all our tech devices. First introduced in 1999 as a somewhat odd marketing term (it’s commonly believed to be short for “Wireless Fidelity”), the wireless networking technology leverages the 802.11 technical standards, which first appeared in 1997.

Since then, WiFi has morphed and adapted through variations including 802.11b, a, g, n, ac, ad, and soon, ax and ay, among others, and has become as essential to our connected devices as power. Along the way, we’ve become completely reliant on it, placing utility-like demands upon its presence and its performance.

Unfortunately, some of those demands have proven to be ill-placed as WiFi has yet to reach the ubiquity, and certainly not the consistency, of a true utility. As a result, WiFi has become the technology that some love to hate, despite the incredibly vital role it serves. To be fair, no one really hates WiFi—they just hate when it doesn’t work the way they want and expect it to.

Part of the challenge is that our expectations for WiFi continue to increase—not only in terms of availability, but speed, range, number of devices supported, and much more. Thankfully, a number of improvements, in both component technology and product definition, have started to appear that should help bring WiFi closer to the completely reliable, utterly dependable technology we all want it to be.

One of the most useful of these for most home users is a technology called WiFi mesh. First popularized by smaller companies like Eero nearly two years ago, then supported by Google in its home routers, WiFi mesh systems have become “the” hot technology for home WiFi networks. Products using the technology are now available from a wide variety of vendors including Netgear, Linksys, TP-Link, D-Link and more. These WiFi mesh systems consist of at least two (and often three) router-like boxes that all connect to one another, boosting the strength of the WiFi signal and creating more efficient data paths for all your devices to connect to the Internet. Plus, they do so in a manner that’s significantly simpler to set up than range extenders and other devices that attempt to improve in-home WiFi. In fact, most of the new systems configure themselves automatically.

From a performance perspective, the improvements can be dramatic, as I recently learned firsthand. I’ve been living with about a 30 Mbps connection from the upstairs home office where I work down to the Comcast Xfinity home gateway providing my home’s internet connection, even though I’m paying for Comcast’s top-of-the-line package that theoretically offers download speeds of 250 Mbps. After I purchased and installed a three-piece Netgear Orbi system from my local Costco, my connection speed over the new Orbi WiFi network jumped by over 5x to about 160 Mbps—a dramatic improvement, all without changing a single setting on the Comcast box. Plus, I’ve found the connection to be much more solid and not subject to the kinds of random dropouts I would occasionally suffer through with the Xfinity gateway’s built-in WiFi router.

In addition, there were a few surprise benefits to the Netgear system that—though they may not be relevant for everyone—really sealed the deal for me. In another upstairs home office, there’s a desktop PC and an Ethernet-equipped printer, both of which had been using separate WiFi hardware. The PC used a USB-based WiFi adapter and the printer had a WiFi-to-Ethernet adapter. Each of the “satellite” routers in the Orbi system has four Ethernet ports supporting up to Gigabit speeds, allowing me to ditch those flaky WiFi adapters and plug both the PC and printer into a rock-solid, fast Ethernet connection on the Orbi. What a difference that made as well.

The technology used in the Netgear Orbi line is called a tri-band WiFi system because it leverages three simultaneously functioning 802.11 radios, one of which supports 802.11b/g/n at 2.4 GHz for dedicated connections with older WiFi devices, and two of which support 802.11a/n/ac at 5 GHz. One of the 802.11ac-capable radios handles connections with newer devices, and the other is used to connect with the other satellite routers and create the mesh network. The system also uses critical technologies like MU-MIMO (Multi-User, Multiple Input, Multiple Output) to leverage several antennas, and higher-order modulation schemes like 256 QAM (Quadrature Amplitude Modulation) to improve data throughput speeds.

Looking ahead in WiFi technology from a component perspective, we’ve started to see the introduction of pre-standard silicon for the forthcoming 802.11ax standard, which offers some nominal speed improvements over existing 802.11ac, but is more clearly targeted at improving WiFi reliability in dense environments, such as large events, tradeshows, meetings, etc. There’s also been some discussion about 802.11ay, which is expected to operate in the 60 GHz band for high speeds over short distances, similar to the current 802.11ad (formerly called WiGig) standard.

As with previous generations of WiFi, there will be chips from companies like Qualcomm that implement a pre-finalized version of 802.11ax for those who are eager to try the technology out, but compatibility could be limited, and it’s not entirely clear yet if devices that deploy them will be upgradable when the final spec does get released sometime in 2019.

The bottom line for all these technology and component improvements is that even at the dawn of the 5G age, WiFi is well positioned for a long, healthy future. Plus, even better, these advancements are helping the standard make strong progress toward the kind of true utility-like reliability and ubiquity for which we all long.

Podcast: Apple HomePod, Google-Nest Integration, Twitter and Nvidia Earnings

This week’s Tech.pinions podcast features Ben Bajarin, Carolina Milanesi and Bob O’Donnell discussing Apple’s HomePod smart speaker, the re-integration into Google of the Nest smart home products business, and the quarterly earnings for Twitter and Nvidia.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Wearables to Benefit from Simplicity

Sometimes simplicity really is better—especially for tech products.

Yet we’ve become so accustomed to, and conditioned by, the idea that tech products need to be sophisticated and full-featured that our first reaction when we see or hear about products with limited functionality is that they’re doomed to failure.

That’s certainly been the case for a while now with wearables, an ever-evolving category of devices that has been challenged to match the hype surrounding it for 5-plus years. Whether head-worn, wrist-worn, or ear-worn, wearables were supposed to be the next big thing, in large part because they were going to do so many things. In fact, there were many who believed that wearables were going to become “the” personal computing platform/device of choice. Not surprisingly, expectations for sales and market impact have been very large for a long time.

Reality, of course, has been different. It’s not that wearables have failed as a category—far from it—but they’ve certainly had a slower ramp than many expected. Still, there are signs that things are changing. Shipments of the Apple Watch, which dominates many people’s definition of the wearable market, continue to grow at an impressive rate for the company. In fact, some research firms (notably IDC) believe the numbers surpassed a notable marker this past quarter, out-shipping the collective output of the entire Swiss watch industry in the same period. Now, whether that’s really the most relevant comparison is certainly up for discussion, but it’s an impressive data point nonetheless.

Initially, the Apple Watch was positioned as a general-purpose device, capable of doing a wide range of different things. While that’s certainly still true, over time, the product’s positioning has shifted towards a more narrowly focused set of capabilities—notably around health and fitness. While it’s hard to specifically quantify, I strongly believe that the narrower device focus, and the inherent notion of simplicity that goes along with it, have made significant contributions to its growing success. When it comes to wearables, people want simple, straightforward devices.

With that in mind, I’m intrigued by news of a new, very simple glasses-based wearable design from an organization at Intel called the New Devices Group. These new glasses, called Vaunt, look very much like traditional eyeglasses, but feature a low-power laser-based display that shoots an image directly onto your retina. Enabled by a VCSEL (vertical-cavity surface-emitting laser) and a set of mirrors embedded into the frame, the Vaunt creates a simple, single-color LED-like display that appears to show up in the lower corner of your field of view, near the edge of your peripheral vision, according to people who have tried it. (Here’s a great piece describing it in detail at The Verge.)

There are several aspects of the device that seem intriguing. First, of course, the fact that it’s a simple, lightweight design, not a bulky or unusual design, and therefore draws no attention to itself, is incredibly important. In an era when every device seems to want to make a statement with its presence, the notion of essentially “invisible” hardware is very refreshing. Second, the display’s very simple capabilities essentially prevent it from overwhelming you with content. Its current design is only meant to provide simple notifications or a small amount of location or context-based information.

In addition, while details are still missing, there doesn’t seem to be a major platform-type effort, but rather a simple set of information services that could theoretically be embedded into the device from the start, or slowly added to it in an app-like fashion, more akin to how skills get added to an Amazon Alexa. So, for example, it could start out by providing the same kind of notifications you currently get on your phone, but then add location-based services, such as directions, or simple ratings for restaurants that are literally right in front of you, as well as intelligent contextual information about a person you might be having a conversation with.

Key to all of this, however, is that the design intentionally minimizes the impact of the display by putting it out of the way, allowing it to essentially disappear when you don’t actively look for it. That ability to minimize the impact of the technology—both functionally and visibly—as well as to intentionally limit its capabilities is a critically important and new way of thinking that I believe can drive wearables and other tech devices to new levels of success.

Ironically, as we look to the future evolution of tech devices, I think their ability to become more invisible will lead them to have a more pervasive, positive impact on our lives.

Smartphone Market Challenges Raise Major Questions

As dynamic and exciting as the smartphone market has been for many years, it’s hard to imagine a time when it just won’t matter that much to most people. Kind of like how many people now feel about the PC market.

Don’t get me wrong, the smartphone market will still be very large and extremely important to some people for quite a while (just as the PC market still is for many—myself included). But the truth is, we’re rapidly approaching the era of smartphone market maturation, and quite possibly, the end of smartphone market growth. Along with those changes is likely to come a shift in attention and focus away from smartphones and towards other, more “interesting” product categories—in the press, on people’s minds, and, most importantly, in critical industry technologies and developments.

The signs of this impending change are all around. In fact, you could argue that this is already starting to occur. While total 2017 worldwide smartphone shipment data may end up showing a modest increase over 2016 (final numbers have yet to be released), the fact that China—the world’s largest smartphone market—showed a 4% decline in Q4 2017 is a very telling and concerning indication of where the market is headed.

Essentially, what that data point tells us is that even in rapidly-growing markets, we’ve started to hit saturation. In other words, pretty much everyone who wants a smartphone now has one, and future market growth will be nearly completely dependent on the length of replacement cycles. Adding insult to injury, we’ve also started to see the extension of smartphone lifetimes from around 2 years to around 3 years in many parts of the world.

The reasons for these extended lifetimes are several, but there can be little doubt that many people are simply content with their current phones and don’t feel a pressing need to upgrade as frequently as they used to. Now that most people have large-screen smartphones, it’s easy to understand why.

But the implications from this shift are dramatic. Individual vendors who have been benefiting from overall industry growth are starting to see a much more challenging competitive environment in many regions around the world. From India to China to the US and beyond, major smartphone vendors are finding it much harder to enjoy the kind of comfortable growth to which they’ve become accustomed.

More specifically, for a company like Apple, the widely discussed topic of another “super cycle,” where a large group of existing customers upgrade to the newest phones, may prove to be a phantom phenomenon. It’s not inconceivable to think that the previous iPhone 6-driven growth spurt was really just a single point in history inspired by the initial transition to large screen phones. Recent reports of 50% reductions in iPhone X production certainly suggest that could be the case. (To be fair, many supply chain-related rumors turn out to be nothing more than that—rumors, with little connection to reality. So, we’ll need to wait until at least the calendar Q1 and maybe even Q2 shipments are released to know for sure.)

Challenges for the iPhone X and other high-end smartphones go well beyond just the appearance (or not) of a replacement “super cycle.” Smartphone maturation has also extended to product design and innovation, with new models from almost all vendors offering little more than incremental changes versus previous generations. Plus, even with new phones that have designs or capabilities that do take an arguably larger leap versus previous generations—such as with the iPhone X—it’s not clear everyone really wants those changes. Again, it seems many consumers are relatively content with their phones as they are.

Even more concerning longer term is the question of more advanced innovations. While there is no doubt that smartphone companies will keep working to improve their products—just as PC makers continue to do, even in an era of declining sales—as product categories mature, those advancements do tend to slow down. Most of the really exciting core technology developments tend to show up in newer product categories that are perceived as having a greater opportunity for future growth.

Smartphones are obviously not going away anytime soon—just as PCs continue to play a critical role for many. But whether it’s some futuristic AR glasses, nearly invisible “ambient” computing wearables, or some other types of devices we’ve yet to even imagine, there’s no doubt that at some point in the relatively near future, smartphones’ maturity and stability will make them seem like “old technology.” It isn’t a question of if, but only a question of when.

Podcast: Intel Earnings, Apple HomePod, National Cyber Security Alliance

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing Intel’s blowout quarterly earnings, the announcement of Apple’s HomePod smart speaker, and Data Privacy Day sponsored by the National Cyber Security Alliance.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Hardware-Based AI

As we start to come to grips with how rapidly, and profoundly, the Artificial Intelligence (AI)-driven revolution has started to impact the tech and consumer electronics industry, it’s worth thinking about near-term evolutions that will start making an impact this year.

At the recent CES show, AI was all around—from autonomous cars to voice-controlled, well, everything—and everybody wanted to get their product or service as closely associated with the technology as they could. In almost every case, the “AI” portion of the product was being powered and enabled by some type of internet connection to a large, cloud-based datacenter.

Whether specifically highlighted or silently presumed, the idea was, and still is, that the “hard work” of AI has to happen in the cloud. But, does it?

What we’re starting to see (or at least hear about) are products that can do at least some of the work of what’s called AI inferencing within the device itself, and without an internet connection. With inferencing, the device has to be able to react to a stimulus of some sort—whether that be, for instance, a spoken word or phrase, an image, or a sensor reading—and provide an appropriate response.

The “intelligence” part of the equation comes from being able to recognize that stimulus as part of a pattern, or something that the system has “learned”. Typically, that learning, or training, process is still done in large datacenters, but what’s enabling the rapid growth of AI is the ability to reduce these “learnings” into manageable chunks of software that can run independently on disconnected hardware. Essentially, it’s running the inferencing portion of AI on “edge” devices.

Simultaneously, we’ve seen both the development of new, and the adaptation of existing, semiconductor chips that are highly optimized for running the pattern-matching neural networks behind modern AI. From DSP (digital signal processing) components inside larger SoCs, to dedicated FPGAs (field programmable gate arrays), to repurposed GPUs and lower-power microcontrollers, there’s a huge range of new choices for enabling AI inferencing, even on very low-power devices.

Part of the reason for the variety is that there is an enormous range of potential AI applications. While many have focused on the very advanced applications, the truth is, there are many more simple applications that can be run across a broad range of devices. For example, if all I need for a particular application is a smart light switch that’s only ever going to respond to a very limited number of verbal commands, doesn’t it make more sense to embed that capability into a device and not depend on an internet connection? Multiply that example by thousands or even potentially millions of other very straightforward implementations of very simplistic AI, and it’s easy to see why so many people are getting excited about hardware-based AI.
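As a concrete sketch of what on-device inferencing for a limited command set might look like, here is a minimal example using TensorFlow Lite’s Python interpreter; the model file and command vocabulary are hypothetical placeholders, and a shipping product would run an equivalent runtime on the device’s own DSP or microcontroller rather than in Python.

```python
# Minimal sketch of on-device keyword inferencing with TensorFlow Lite; the
# model file and command list are hypothetical placeholders.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="light_switch_commands.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# On a real device, 'features' would be audio features (e.g., a spectrogram)
# computed locally from the microphone; here it's just a zero placeholder.
features = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], features)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])

commands = ["on", "off", "dim", "unknown"]  # the limited vocabulary the model knows
print("Recognized command:", commands[int(np.argmax(scores))])
```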

Even on more advanced applications, like today’s personal-assistant driven smart speakers, the product’s evolution is moving towards doing more work locally on the device without an internet connection. Doing so enables faster response times, more customization, reduces network traffic, and if implemented intelligently, even enhances privacy by enabling some aspects of personalization and customization to remain on the local device and not be shared to the cloud.

Moving forward, the tough part is going to be determining how the recognition tasks get broken up so that some can be done locally and some in the cloud. Those are exactly the kinds of AI-based evolutions that we should expect this year and over the next several years. It’s going to take lots of clever new hardware and software to enable these scenarios, but companies who are successful should be able to provide more “intelligent” and more capable solutions that consumers (and businesses) are bound to adopt in much larger numbers.

To be clear, many implementations of AI are going to be dependent on network connections and large cloud-based datacenters, even for inferencing. But the opportunities for leveraging AI are so vast, and the range of applications (and computing requirements) so large, that there’s plenty of opportunity for many different levels of AI on many different types of hardware, powered by many different types of components. Translated into business potential, that also means there are strong opportunities for many different companies in the semiconductor, end product, software, services, and cloud-driven datacenter industries.

Figuring out how we evolve from a purely cloud-based AI model to one that increasingly relies on hardware is going to be an interesting path that’s likely to develop many different sub-routes along the way. In the end, however, these developments highlight how cloud-based computing continues to evolve, how devices on the “edge” are becoming increasingly important, and how hardware continues to play a critical role for the future of the tech industry.

The Tech Industry Needs Functional Safety

The tech industry’s infatuation with the automobile industry has become rather obvious over the last few years. Nearly everyone in tech, it seems, is dying to get involved with automotive, whether at the component level, through high-level partnerships, or even on a finished-vehicle basis.

The reason, of course, is rather simple—it’s the money. As big and strong as the tech industry may be (counting combined revenues of PCs, smartphones, tablets and wearables), the automobile industry is still several times larger by most counts, with worldwide revenues reaching into the trillions of dollars per year.

In addition to the dollars, many in the tech business believe they can bring new capabilities and perspectives into the auto business. Put another way, there’s some pretty flagrant egotism in the tech business with regard to automotive, and many in tech believe they can help drag the traditional, and (in their minds) rather archaic, auto industry into the modern era.

While there may be some nugget of truth to that argument, the reality is that the auto industry actually has a lot it can teach the tech business, specifically around safety and reliability. The concept of functional safety—famously standardized around the ISO 26262 standard—in particular, is something the tech industry should really spend some time thinking about.

The specific requirements for functional safety are varied, but the concept essentially boils down to redundancy and back-up systems and capabilities. Given the potential impact on human lives, automobile makers and their critical suppliers have, for decades, had to design the critical systems within a modern car so that they can fall back on an alternative in the event of a failure. Though it can be challenging to implement, it’s an extremely impressive idea that, conceptually at least, has potential applications in many areas outside the automotive industry, including essential utilities like the power grid, as well as increasingly essential tech components and tech devices.
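For readers who think in code, here’s one way to picture the fall-back mindset that functional safety formalizes; this is a deliberately toy sketch, not anything drawn from ISO 26262 itself, and the sensor readings and threshold are made up purely for illustration:

```python
# Toy illustration of redundancy and graceful fall-back, in the spirit of functional
# safety thinking. Not an ISO 26262 implementation; sensors and threshold are hypothetical.
from statistics import median

DISAGREEMENT_LIMIT_KMH = 5.0  # made-up plausibility threshold

def read_redundant_speed_sensors() -> list:
    # Placeholder for three independent wheel-speed readings (km/h).
    return [82.1, 81.9, 82.3]

def vehicle_speed_kmh() -> float:
    readings = [r for r in read_redundant_speed_sensors() if r is not None]
    if len(readings) >= 2 and (max(readings) - min(readings)) <= DISAGREEMENT_LIMIT_KMH:
        return median(readings)  # sensors agree: use the consensus value
    # Sensors missing or in disagreement: degrade gracefully rather than trusting bad data.
    raise RuntimeError("speed sensors failed cross-check; switching to limp-home mode")
```

The real discipline, of course, is in proving that this kind of behavior holds under every plausible failure mode, which is exactly the sort of rigor the tech industry rarely imposes on itself.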

Thankfully, many in the tech industry have started to catch on. In fact, one of the most impressive demonstrations at the recent CES show was Nvidia’s focus on functional safety in some of their latest components and systems designed for assisted and autonomous driving. The company’s CEO, Jensen Huang, spent a significant amount of time at their CES press conference, highlighting all the work they’d done to get ASIL-D (Automotive Safety Integrity Level D, which is the highest available) certification on their new Nvidia Drive architecture.

While the topic can be complex, Huang did an excellent job explaining the effort required to get their new Nvidia Xavier platform—which integrates BlackBerry’s QNX-64 software platform in conjunction with their latest silicon—to be ISO 26262-compliant and reach ASIL-D certification. He enthusiastically talked about the specific challenges they had to overcome to make it happen, and proudly claimed it to be the first autonomous driving platform to reach that level of functional safety.

As impressive as that development is, it also made me think about the need to apply functional safety-type standards to the tech industry overall. While using tech devices doesn’t typically involve the kinds of life-and-death situations that driving or riding in a car can, it’s no longer an exaggeration to say that tech devices have a profoundly important impact on our lives. Given that importance, doesn’t it make sense to start thinking about the need for tech products that have the same level of reliability and redundancy as cars?

As recent natural disasters of all types have clearly illustrated, our overall dependence on technology has become pervasive. In addition, the recent Meltdown and Spectre chip flaws have shone a rather harsh light on both how deep and, at the same time, how fragile that dependence on technology is. While strong efforts are being made through an impressive collaboration of tech industry vendors to address these flaws, the fact that a technology (speculative execution) that has been a key part of virtually every major processor produced by every major chip manufacturer over the last two decades is only now being exploited clearly highlights how vulnerable our technology dependence has become.

Though there are no easy answers to these big picture challenges, it’s clear that we need to gain a fresh and very different perspective on technology products, our relationship to them, and our reliance on them. It’s also clear that the tech industry could actually learn from some old-school industries—like automotive—and start to apply some of their hard-won lessons into both component and finished product designs. The concept of functional safety may not be a perfect analogy for the tech business, but there’s no question that it’s time to start thinking differently about how tech products are designed, how we use them, and what we should expect from them.

Podcast: CES 2018

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell discussing the major themes from this year’s CES show, including the growth in robotics, the influence of AI and automation, the declining presence of Apple, the rise in voice-based interfaces, and more.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Will AI Power Too Many Smart Home Devices?

To the surprise of virtually no one, the overriding theme at this year’s CES appears to be artificial intelligence. At press conference after press conference on the media days before the show’s official start, vendor after vendor extolled the virtues of AI, though each of them offered a little bit of their own twist. At Samsung’s preview event on Sunday night, the company talked about using AI to do video upscaling on some of their newest TVs. Later that night, Nvidia CEO Jen-Hsun Huang spent a good amount of time describing the effort the chip company had put into developing accelerator chips optimized for both training and inferencing for the deep neural networks being used in AI applications.

Monday morning, LG announced their own AI brand—ThinQ—which will be used to delineate all of their new products that utilize the technology. Monday afternoon, Qualcomm talked about bringing AI to a variety of new applications and platforms, from hearables and other audio-focused products, to automotive applications and beyond. At the Sony press conference, the upgraded Aibo dog—a name now recognized to be a combination of AI and robot—charmed the crowd with its capabilities. Finally, on Monday evening, Intel CEO Brian Krzanich described a world where AI can be used for everything from space exploration, through content creation, and on to autonomous cars.

In addition to AI, we saw a large number of announcements related to smart home, connected devices, and personal IoT. In most cases, the two concepts were tied together, with the connected home devices being made “smart” by AI technologies, as Samsung displayed at their primary press conference event on Monday.

All told, it was an impressive display of both how far AI has come and how many different ways the technology could be applied. At the same time, it raised a few potentially disturbing questions.

Most notably, it seems clear that we’re all inevitably going to end up having quite a few AI-enabled devices within our homes. While that’s great on one hand, there’s no clear way to share that intelligence and capability across devices, particularly if they’re made by different companies. The challenge is that, just as few people ever buy their entire home theater AV stack from a single vendor, and few people buy all their computing devices from a single vendor running related operating systems, it’s highly unlikely that we’re going to buy all our AI-enabled smart devices from a single vendor. In other words, we’re likely going to end up having a variety of different products from different vendors, with a high probability that they won’t all seamlessly connect and share information with one another.

In the case of basic connectivity, a number of those issues will likely be overcome, thanks to advancements in connectivity standards, as well as the abundance of gateway products that can bridge across different standards and protocols. What can’t easily be solved, however, is the sharing of AI-enabled personalization across all these smart devices. The result is that several different types of devices will be collecting data about how we interact with them, what our habits and preferences are, etc. Not only does that mean a lot of the efforts will be redundant, but concerns about being personally tracked or monitored feel (and are) a lot worse when multiple companies are going to end up doing it simultaneously within our own homes.

Down the road, there may be an opportunity to create standards for sharing personalization information and other types of AI-generated data from our smart connected devices to avoid some of these issues. In the meantime, however, there are some very legitimate Orwellian-type concerns that need to be considered as companies blindly (and redundantly) follow their own approaches for collecting the kind of information they need to make their products more personal and more effective.

Podcast: Meltdown And Spectre Chip Issues, 2018 Predictions, CES Preview

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the Spectre and Meltdown chip issues, discussing predictions for the tech industry in 2018, and previewing the upcoming CES trade show.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Top Tech Predictions for 2018

After a surprisingly robust 2017, our collective attentions now turn to the new year, and many questions arise as to what will happen in 2018. Some developments will likely be obvious extensions to themes or ideas that started to take hold over the past 12 months, but there are always a few surprises as well. Exactly what those unexpected developments prove to be is virtually impossible to foretell, but here’s a reasoned look at some key themes I expect to drive important advances in the tech industry in 2018.

Prediction 1: AI Will Start to Enable Empathetic, Intelligent Computing

As powerful and advanced as the tech products we use every day may be, it’s still a stretch to call them “intelligent.” Sure, they can perform impressive tasks, but in many ways, they still can’t do the simplest of things, particularly in helping us do what we “meant to do” or thought to do, as opposed to exactly what we told them to do.

The problem boils down to a question of interpretation, or the lack thereof. Eventually, we’ll all have devices and services that do what it is we’re trying to do in the most intelligent, most efficient way possible. The reality of those completely intelligent devices is still a long way off, of course, but in 2018, I think we’ll start to see some of the first significant glimpses of what is yet to come, through key enhancements in the various personal assistant platforms (Alexa, Google Assistant, Siri, etc.).

Specifically, I believe practical advancements in artificial intelligence will start to enable a more contextual form of computing with some of our devices, particularly smartphones and smart speakers. Going beyond the simple question and single, discrete response that typically marks these interactions now, we should start to see more human-like responses to our queries and requests. Multi-part conversations, more comprehensive answers, as well as appropriate and even insightful suggestions based on what it is we’re doing (or trying to do), will start to give us the sense that our devices are getting smart.

Ironically, part of the way this development will likely occur is by learning more about people and how they think—essentially building a form of digital empathy. By embedding more of the typical reactions that people have to certain questions, or the awareness of what they typically do in certain situations into AI models, we should start to see our digital devices become just a bit more human. In fact, if the technology advances as I expect, we’ll be able to stop calling AI “artificial intelligence” and start calling it “actual intelligence.”

Prediction 2: For Semiconductors, 2018 Will be the Year of the Accelerator

After decades of domination by CPUs, the world of semiconductors is witnessing a dramatic transformation in which other types of chip architectures are starting to drive a major shift in power, importance and influence. The process started several years back with the rise of GPUs in applications beyond just their traditional role of computer graphics, most notably machine learning and artificial intelligence. Since then, we’ve seen many other important types of chip architectures come to prominence, from old-line concepts like FPGAs (field-programmable gate arrays) to new ones like dedicated vision processing units (VPUs), AI-focused tensor processing units (TPUs), and much more. Collectively, these alternative chip architectures are often called “accelerators” because they speed up specific tasks.

In 2018, we’re likely to see several additional new chip architecture announcements and, more importantly, much wider use of computing accelerators across a variety of devices and applications. From data centers to consumer gadgets, the next year will see a tremendous amount of attention focused on these newer types of more specialized semiconductor chips. To be clear, it’s not that CPUs and other traditional computing elements will be going away, but a sizable portion of software development, R&D, and overall focus in 2018 is going to be directed towards these game-changing new components.

Part of the reason accelerators are so important is that they’re more specifically targeted tools. While CPUs are still great general-purpose computing engines, these computing accelerators are designed to do certain tasks faster and more efficiently. Realistically, none of the individual accelerators will have the same level of influence that CPUs have had, but collectively, their impact in the new year will be strongly felt.
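As a rough sketch of what that division of labor looks like in practice, here’s how a developer might send the same matrix-heavy workload to an accelerator when one is present and quietly fall back to the CPU when it isn’t (PyTorch is used here purely as one familiar example of such a framework, not as the only way to do it):

```python
# Sketch: dispatch the same workload to an accelerator if available, otherwise to the CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A deliberately generic, matrix-heavy workload of the kind accelerators handle well.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
result = a @ b  # runs on the GPU's parallel hardware when device is "cuda"

print(f"Ran a 4096x4096 matrix multiply on: {device}")
```

The CPU path still works here; it just takes far longer, which is essentially the argument for these more targeted parts.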

The implications of the shift away from general purpose semiconductors to more tightly focused chips are huge. Not only will this development drive changes in terms of how products are designed, and what types of products are built, but it will also likely shift the role and influence that designers and builders of these chips have on the overall tech industry supply chain. Plus, though it may not be easy to predict exactly what new technology-based products will surprise us this year, it’s not hard to say that there’s a very good chance they’re going to be powered by one of these new computing accelerators.

Prediction 3: AR/VR Focus Shifts to Part Time Use

As many predicted, 2017 proved to be a very eventful year for augmented and virtual reality. Last year saw the introductions of Apple’s ARKit and Google’s ARCore smartphone augmented reality platforms, the release of Windows 10 Mixed Reality headsets, the surprising unveiling of Lenovo’s Mirage headset as part of their Star Wars Jedi Challenges collaboration with Disney, the announcement of a low-cost headset from Facebook-owned Oculus, and even the debut of Magic Leap’s mixed reality headset.

With all these pieces in place, it would seem that AR and VR are primed to have a very strong 2018. And yet, I believe the opposite will be the case. While this year will certainly enjoy its share of additional announcements and developments for AR and VR products, the general level of enthusiasm and excitement around the category has arguably faded—we’ve entered the trough of disillusionment.

As a result, I believe 2018 will likely be a year of reflection and refocusing for AR and VR, with particular attention being paid to usage models that are different than what many had originally predicted. Part of the problem is that there have been unrealistic expectations about how often and how long people want to engage in AR or VR experiences. Initial hype around the categories suggested that they could provide a new computing paradigm that would essentially replace our existing all-day devices. The reality, however, is far different. Between the obvious challenges of having to hold your AR-enabled smartphone out in front of you to use an AR app, as well as the physical discomfort of wearing large, bulky headsets, most AR and VR product experiences are measured in small numbers of minutes.

Now, there isn’t necessarily anything wrong with that scenario, if that’s what you were led to believe was going to be the case, and if you bought a product that was priced and positioned as an occasional-use peripheral. At the moment, however, many AR and VR products are as expensive as primary computing products, and many AR and VR apps presume people will use them for significant amounts of time. That’s why this year, I expect to see a lot of effort being spent to develop AR and VR products and experiences that better match their realistic part-time usage levels.

Prediction 4: Edge Computing Transforms the Cloud and Subsumes IoT

For the last few years, it seems, everyone and their sister has been waxing rhapsodic about the virtues and opportunities offered by the Internet of Things (IoT). Connecting all kinds of devices together using simple internet protocol (IP) connectivity standards was supposed to enable smart cities, smart homes, smart factories, heck, pretty much smart everything. And yet, here we stand at the dawn of 2018 and—with a few notable exceptions—very few of those things have really come to pass.

In retrospect, it’s not really hard to see why. Just connecting things together doesn’t necessarily provide any real value, and even tapping into the new data sources enabled by IoT hasn’t proven to be as fruitful as many had hoped.

The IoT exercise hasn’t all been for naught, however, because it opened many people’s eyes to new types of computing models. In particular, the notion of a fully distributed computing environment, where different types of devices with widely varying levels of computing power can be both connected and organized together in a way that allows them to achieve certain tasks, is arguably an evolution of the original IoT concept. Instead of focusing on just the connection of “the things,” however, this new model focuses on the combined, distributed compute of “the things.” What this does is enable different groupings of devices or infrastructure components to work together in a more efficient and more effective manner. It’s essentially applying the principles of distributed internet connectivity to computing resources, and in the process, transforming how we think about centralized cloud computing models.

One of the key ingredients in this new, evolved version of the cloud is computing components that sit out at the end of the network, closest to where people who are trying to achieve certain tasks are located. These edge computing elements—which can range from consumer endpoints to industrial gateways to network routers—become a critical part of this new computing fabric, in part, because they pull some of the work that used to be done in centralized cloud computing resources and do that work on the edge. In some instances it’s to avoid latency, in others it’s to take advantage of unique capabilities found within edge devices, but regardless, they shift the balance of computing power away from the cloud at the center of everything and towards a highly distributed, edge-driven model. We saw the beginnings of this trend in 2017, but in 2018 edge computing will start to deliver on the promises originally made by IoT and drive not just a more connected, but a more intelligent tech device world.
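One hedged sketch of how that division of work might look in software: the edge device answers a request itself whenever its local model is confident enough, and only defers to a larger cloud-hosted model when it isn’t. The local model and the cloud endpoint below are hypothetical stand-ins, not any particular vendor’s API.

```python
# Sketch of edge-first processing with a cloud fallback. The local_model() function
# and the cloud endpoint are hypothetical placeholders, not a specific product's API.
import json
import urllib.request

CONFIDENCE_THRESHOLD = 0.85
CLOUD_ENDPOINT = "https://example.com/recognize"  # hypothetical

def local_model(payload: bytes):
    # Placeholder for an on-device model; returns (label, confidence).
    return "unknown", 0.40

def handle_request(payload: bytes) -> str:
    label, confidence = local_model(payload)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # fast path: answered entirely at the edge, no round-trip latency
    # Slow path: send the request to the bigger model running in the cloud.
    req = urllib.request.Request(
        CLOUD_ENDPOINT, data=payload,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["label"]
```

The interesting engineering questions are in where that confidence threshold sits and which tasks get handled on which side of it, which is exactly the balancing act described above.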

Prediction 5: 5G Drives New Types of Competitive Connectivity

With the recent formalization of the 5G NR (new radio) specification (part of 3GPP Release 15 to be exact), the telecom industry has entered into an exciting phase. 2018 will not, however, bring either the official unveiling or the commercial availability of 5G—you’ll have to wait until at least 2019 for that. The new year will start to bring us some critical connectivity enhancements, though, many of which will be inspired by 5G.

While most people are focused on the expected speed enhancements to be offered by 5G, many of the more exciting ideas associated with the forthcoming broadband wireless network standard have to do with the range of connectivity options becoming available. In particular, the ability to have a consistent connectivity experience, regardless of your location, the number of people and devices around you, and the other factors that commonly limit the speed or quality of our connections, is a key goal for 5G. In addition, there’s a growing need for extremely low power connectivity options for some types of devices that live out at the network edge.

Thankfully, many of these same principles and some of the key underlying technologies necessary for 5G will be showing up in other places in 2018, with the net result being that we’ll start to enjoy some 5G-like benefits this calendar year.

The latest WiFi standards (802.11ax and 802.11ay), for example, are expected to make a formal appearance in 2018. They will leverage both the kind of spectrum efficiency and signal modulation enhancements that 5G will use to fit more connections and data into a given amount of radio “space,” and, in the case of 802.11ay, millimeter wave radio technologies for transmitting data at faster rates (though over shorter distances).

This year should also see the wider appearance of several technologies that combine elements of LTE and WiFi, including LAA (Licensed Assisted Access), LWA (LTE-WLAN Aggregation), and MulteFire, all of which offer improvements in wireless service by using some of the unlicensed radio spectrum currently used by WiFi to deliver wireless broadband 4G LTE signals.

In addition, with the proposed purchase of Sigma Designs by Silicon Labs in 2018, we should see the combination of Zigbee and ZWave—two ultra-low-power connectivity options primarily used in smart home applications—in future iterations of the combined company’s products.

Ironically, many of these newly enhanced connectivity options, arguably inspired at least somewhat by 5G, may end up competing with it at some point past 2018, but for the next year, we should be able to enjoy the enhanced connection benefits.

Prediction 6: Tech Business Models Fall Under Increasing Scrutiny

After years of blissful detachment, the tech industry faced an unprecedented amount of pushback and scorn in 2017, as major tech companies like Google, Facebook, Uber and more found themselves in challenging political, judiciary, and legislative environments. In 2018, the pressure that tech companies face will likely increase, and I expect that scrutiny will start to extend to business models as well.

The tech industry continues to perform extraordinarily well from an economic and business outcome perspective, and it’s generally been seen as a safe haven for investment. However, it’s also been granted extraordinary leeway when it comes to valuations for companies that—were they in any other industry—would be considered to have pretty questionable business models (from a potential profitability perspective, that is).

Tech-focused venture capitalists, hedge fund managers, and other financiers have managed to create and nurture an environment where businesses built on pretty flaky ideas continue to survive and even support one another. At some point, this house of cards is likely to fall and with increasing amounts of unwanted attention being focused on the tech industry, 2018 could be the year that it does.

In particular, I think we’ll see questions around “sharing economy”-based companies (such as Uber, etc.), which enjoy enormous valuations despite not having made any profits, and are unsafely dependent on regulations that could easily change. In addition, despite the current hype, cryptocurrencies like Bitcoin—which represent the ultimate example of a technology valuation based on essentially nothing but an interesting idea—are likely to face serious challenges in 2018 as well.

Prediction 7: Voice Computing Cacophony

As intriguing, popular and powerful as voice-based computing devices like smart speakers are proving to be, I expect that natural language processing (NLP)-driven interactions are going to start facing some serious challenges in 2018. Why, you ask? In part, I think they could end up being a victim of their own success.

Because of the popularity of voice-based personal assistants, we’re starting to see the technology embedded across a wide range of devices, from lamps, to TVs, to cars and beyond. In addition, after initial experiments with a single unit, many people have started putting smart speakers all over their homes. The practical net result is that sometime in 2018, a large percentage of personal assistant users will have regular access to multiple assistants simultaneously—often across multiple platforms. Combine that with the fact that in 2018 we’ll likely start seeing vendors enabling people to customize the trigger word for these various assistants to start listening and, well, it’s a quick recipe for disaster.

Imagine what will happen, for example, if, for consistency’s sake, you change them all to respond to “computer” and then you ask a question or make a request to the “computer.” Yikes! (Or, at least, an interesting exercise.) As it is, I’ve started to see a few people talk about how they’ve heard their Google Assistant automatically respond to their Amazon Echo (or vice versa). Throw in your own queries and the chances for vocal computing mayhem will likely be very high.

Tech vendors can obviously start to plan for a few of these potential outcomes and consumers can try to avoid them, but this isn’t the only challenge associated with the realities of multiple competing assistants in a single household. What if you actually want to take advantage of the different assistants to do different tasks because you find one is better at responding to one kind of query than another? In essence, you’ll have to try to learn to speak the unique “language” of each assistant and direct the appropriate query to the appropriate assistant. We may see third-party vendors try to create a “meta-assistant” that doles out different questions to different assistants and becomes the single point of interaction, but the likelihood of competing tech vendors enabling this type of capability seems low.
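To be clear about what such a “meta-assistant” would actually have to do, here’s a toy sketch of the routing layer; the assistant back-ends and the keyword rules are entirely hypothetical, and the hard part in real life would be the business arrangements between competing vendors, not this logic:

```python
# Toy sketch of a "meta-assistant" router that sends each query to whichever
# back-end is presumed best for it. Back-ends and keyword rules are hypothetical.
def ask_shopping_assistant(query: str) -> str:
    return f"[shopping assistant] {query}"

def ask_knowledge_assistant(query: str) -> str:
    return f"[knowledge assistant] {query}"

ROUTES = [
    ({"order", "buy", "reorder"}, ask_shopping_assistant),
    ({"who", "what", "when", "where"}, ask_knowledge_assistant),
]

def route(query: str) -> str:
    words = set(query.lower().replace("?", "").split())
    for keywords, backend in ROUTES:
        if words & keywords:  # any routing keyword present in the query
            return backend(query)
    return ask_knowledge_assistant(query)  # default back-end

print(route("Who won the game last night?"))
```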

Don’t get me wrong, I’m certainly excited about the potential that voice-based interactions can bring to speakers and all our tech devices, but I’m concerned we could hit some serious roadblocks in 2018.

Podcast: Magic Leap, Apple Battery, MacOS-iOS Combination

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing the new Magic Leap mixed reality goggles, analyzing the Apple iPhone battery performance controversy, and debating the potential impact of a future combination of MacOS and iOS.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Tech’s Biggest Challenge: Fulfilling the Vision

The last several years have seen a tremendous expansion in the ideas, concepts, and overall vision of where the world of technology is headed. From mind-bending escapades into virtual and augmented reality, on to thought-provoking voice interactions with AI-powered digital assistants, towards an Internet filled with billions of connected things, into enticing experiments with autonomous vehicles, and up through soaring vistas enabled by drones, the tech world has been far from lacking in big picture perspectives on where things can go.

Vision, however, isn’t the hard part. The really challenging task, which the industry is just starting to face, is actually executing on those grandiose ideas. It’s all fine and good to talk about where things are going to go—and building out grand blueprints for the future is a critical step for setting industry direction—but it’s becoming clear that now is the time for true action.

Excitement around these big picture visions has begun to fade, replaced increasingly by skepticism of their feasibility, particularly when early efforts in many of these areas have failed to meet the kind of mass success that many had predicted. People have heard enough about what we could do, and are eager to see what we can do.

It’s also more than just a simple dip in the infamous Gartner hype cycle, which describes a path that many new technologies face as they enter the market. According to that widely cited predictive tool, initial excitement around a new technology grows, eventually reaching the point where hype overtakes reality. After that, the technology falls into the trough of disillusionment, as people start to question its impact, before finally settling into a more mature, balanced perspective on its long-term value.

What’s happening in the tech industry now is a much bigger change. After years of stunning new ideas and concepts that hinted at a radically different tech future way beyond the relatively simple advances that were being made in our core tech devices, there’s an increasing recognition that it’s a very long road between where we are now, and where we need to be in order for those visions to be realized.

As a result, there’s a major resetting of expectations going on in the industry. It’s not that the ultimate goals have changed—we’re still headed towards truly immersive AR/VR, conversation-ready AI tools, fully autonomous cars, a seamlessly connected Internet of Things, and much more—but timelines are shifting for their full-fledged arrival.

In the meantime, the industry has to dig into the nitty-gritty of developing all the critical technologies and standards necessary to enable those game-changing developments. Unfortunately, much of that work is likely to be slow-going, and, in many instances, won’t necessarily translate into immediately obvious advances. It’s not that technological innovation will cease or even slow down, but I do believe many advances are going to be more subtle and much less obvious than what many have become accustomed to. As a result, some will think that major tech developments have started to slow.

Take, for example, the world of Artificial Intelligence. By all accounts, refinements in AI algorithms continue at a frenetic pace, but how those get translated into real-world uses and practical implementations isn’t at all clear and, therefore, isn’t moving nearly as quickly. Part of the reason is that the difference between, say, today’s digital assistants and future versions that are contextually intelligent is likely to unfold along a long, mildly-sloped line of changes that will be difficult for many people to perceive. The difference between a current assistant that can only respond to a relatively simple query and a future version that will be able to engage in intelligent, multi-part conversations is certainly going to be noticeable, but there will likely be lots of subtle, difficult-to-distinguish changes along the way. Plus, it seems a lot less dramatic than the first few times you spoke to a smart speaker and it actually responded.

If we take a step back and look at the larger global arc of history that the tech industry currently finds itself in, I’d argue we’re in a transitional period. After decades of evolution centered around PCs, smartphones, and simple web browsing, we entered an epoch of intelligent machines, seamless connectivity, and web-based services several years back that allowed the industry to dream big about what it could achieve. Now that we understand those visions, however, the industry needs to get to the hard work of truly bringing those visions to life.

Podcast: Net Neutrality, Disney-Fox, Apple-Shazam, Microsoft AI

This week’s Tech.pinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell discussing the Net Neutrality decision, Disney’s purchase of 20th Century Fox, Apple’s purchase of Shazam, and Microsoft’s new AI-related announcements.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Dawn of Gigabit Connectivity

From cars to computers to connectivity, speed is an attractive quality to many people. I mean, who can’t appreciate devices or services that help you get things done more quickly?

While raw semiconductor chip performance has typically been—and still is—a critical enabler of fast tech devices, in many instances, it’s actually the speed of connectivity that determines their overall performance. This is especially true with the ongoing transition to cloud-based services.

The problem is, measuring connectivity speed isn’t a straightforward process. Sure, you can look for connectivity-related specs for your devices, or run online speed tests (like Speedtest.net), but very few really understand the former and, as anyone who has tried the latter knows, the results can vary widely, even throughout the course of a single day.

The simple truth is, for a lot of people, connectivity is black magic. Sure, most people have heard about different generations of cellular technology, such as 4G or the forthcoming 5G, and many even have some inkling of different WiFi standards (802.11n, 11ac, 11ad, etc.). Understanding how or why your device feels fast doing a particular online task one day and on other days it doesn’t, however, well, that’s still a mystery.

Part of the reason for this confusion is that the underlying technology (and the terminology associated with it) is very complex. Wireless connectivity is a fundamentally difficult task that involves not only complex digital efforts from very sophisticated silicon components, but a layer of analog circuitry that’s tied to antennas and physical waveforms, as well as interactions with objects in the real world. Frankly, it’s amazing that it all works as well as it does.

Ironically, despite its complexity, connectivity is also something that we’ve started to take for granted, particularly in more advanced environments like the US and Western Europe. Instead of being grateful for having the kinds of speedy connections that are available to us, we’re annoyed when fast, reliable connectivity isn’t there.

As a result of all these factors, connectivity has been relegated to second-class status by many, overshadowed by talk of CPUs, GPUs, and other types of new semiconductor chip architectures. Modems, however, were arguably one of the first specialty accelerator chips, and play a more significant role than many realize. Similarly, WiFi controller chips offer significant connectivity benefits, but are typically seen as basic table stakes—not something upon which critical product distinctions or buying decisions are made.

People are starting to finally figure out how important connectivity is when it comes to their devices, however, and that’s starting to drive a different perspective around communications-focused components. One of the key driving factors for this is the evolution of wireless connectivity to speeds above 1 gigabit per second (1 Gbps). Just as the transition to 1 GHz processors was a key milestone in the evolution of CPUs, so too has the appearance of 1 Gbps wireless connectivity options enabled a new perspective on communications components such as modems and WiFi controllers.

Chipmaker Qualcomm was one of the first to talk about both Gigabit LTE for cellular broadband modems and greater-than-1 Gbps speeds for 802.11ac (in the 5 GHz band) and 802.11ad (in the distance-constrained 60 GHz band). Earlier this year, Qualcomm demonstrated Gigabit LTE in Australia with local Aussie carrier Telstra, and just last month, they showed off similar technology here in the US with T-Mobile. In both cases, they were using a combination of Snapdragon 835-equipped phones—such as Samsung’s S8—which feature a Category 16 (Cat16) modem, and upgraded cellular network equipment from telecom equipment providers, such as Ericsson. The company also just unveiled their new Snapdragon 845 chip, expected to ship in smartphones later in 2018, which offers an even faster Cat18 modem with a maximum download speed of 1.2 Gbps.

In the case of both faster LTE and faster WiFi, communications component vendors like Qualcomm have to deploy a variety of sophisticated technologies, including MU-MIMO (multi-user, multiple input, multiple output) transmission and antenna technologies and 256 QAM modulation schemes (which pack more bits of data into each transmitted radio symbol), among others.
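To give a sense of how those pieces add up to a gigabit, here’s some deliberately rough, back-of-the-envelope arithmetic; real-world throughput depends heavily on the specific carrier configuration and radio conditions, so treat the numbers as illustrative rather than definitive:

```python
# Rough arithmetic for "Gigabit LTE"-class peak speeds (illustrative only).
BASE_20MHZ_2X2_64QAM_MBPS = 150      # roughly a classic Cat 4 LTE carrier
QAM256_GAIN = 8 / 6                  # 8 bits per symbol vs. 6 with 64 QAM

carrier_4x4 = BASE_20MHZ_2X2_64QAM_MBPS * QAM256_GAIN * 2  # ~400 Mbps with 4x4 MIMO
carrier_2x2 = BASE_20MHZ_2X2_64QAM_MBPS * QAM256_GAIN      # ~200 Mbps

# One commonly described gigabit-class setup: three aggregated 20 MHz carriers,
# two of them using 4x4 MIMO and one using 2x2, all with 256 QAM.
total_mbps = 2 * carrier_4x4 + carrier_2x2
print(f"Approximate peak downlink: {total_mbps:.0f} Mbps")  # ~1000 Mbps
```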

The net result is extremely fast connection speeds that can (and likely will) have a dramatic impact on the types of cloud-based services that can be made available, as well as our quality of experience with them. There’s no denying that the technology behind these speedy connections is complicated, but with the dawn of the gigabit connectivity era, it’s time to at least acknowledge the impressive benefits these speedy connections provide.