Top Tech Predictions for 2019

Though it’s a year shy of the big decade marker, 2019 looks to be one of the most exciting and most important years for the tech industry in some time. Thanks to the upcoming launch of some critical new technologies, including 5G and foldable displays, as well as critical enhancements in on-device AI, personal robotics, and other exciting areas, there’s a palpable sense of expectation for the new year that we haven’t felt for a while.

Plus, 2018 ended up being a pretty tough year for several big tech companies, so there are also a lot of folks who want to shake the old year off and dive headfirst into an exciting future. With that spirit in mind, here’s my take on some of what I expect to be the biggest trends and most important developments in 2019.

Prediction 1: Foldable Phones Will Outsell 5G Phones
At this point, everyone knows that 2019 will see the “official” debut of two very exciting technological developments in the mobile world: foldable displays and smartphones equipped with 5G modems. Several vendors and carriers have already announced these devices, so now it’s just a question of when and how many.
Not everyone realizes, however, that the two technologies won’t necessarily come hand-in-hand this year: we will see 5G-enabled phones and we will see smartphones with foldable displays. As yet, it’s not clear that we’ll see devices that incorporate both capabilities in calendar year 2019. Eventually, of course, we will, but the challenges in bringing each of these cutting-edge technologies to the mass market suggest that some devices will include one or the other, but not both. (To be clear, however, the vast majority of smartphones sold in 2019 will have neither an integrated 5G modem nor a foldable display—high prices for both technologies will limit their impact this year.)

In the near term, I’m predicting that foldable display-based phones will win out over 5G-equipped phones, because the impact that these bendable screens will have on device usability and form factor is so compelling that I believe consumers will be willing to forgo the potential 5G speed boost. Plus, given concerns about pricing for 5G data plans, limited initial 5G coverage, and the confusing (and, frankly, misleading) claims being made by some US carriers about their “versions” of 5G, I believe consumers will hold off on 5G until more of these issues are resolved. Foldable phones, on the other hand—while likely to be expensive—will offer a very clear value proposition that I believe consumers will find even more compelling.

Prediction 2: Game Streaming Services Go Mainstream
In a year when there’s going to be a great deal of attention placed on new entrants to the video streaming market (Apple, Disney, Time Warner, etc.), the surprise breakout winner in cloud-based entertainment in 2019 could actually be game streaming services, such as Microsoft’s Project xCloud (based on its Xbox gaming platform) and other possible entrants. The idea with game streaming is to enable people to play top-tier games across a wide range of both older and newer PCs, smartphones, and other devices. Given the tremendous growth in PC and mobile gaming, along with the rise in popularity of eSports, the consumer market is primed for a service (or two) that would allow gamers to play popular high-quality gaming titles across a wide range of different device types and platforms.

Of course, game streaming isn’t a new concept, and there have been several failed attempts in the past. The challenge is delivering a timely, engaging experience in the often-unpredictable world of cloud-driven connectivity. It’s an extraordinarily difficult technical task that requires lag-free responsiveness and high-quality visuals packaged together in an easy-to-use service that consumers would be willing to pay for.

Thankfully, a number of important technological advancements are coming together to make this possible now, including improvements in overall connectivity via Wi-Fi (such as Wi-Fi 6) and wide-area cellular networks (and 5G should improve things even more). In addition, there’s been widespread adoption and optimization of GPUs in cloud-based servers. Most important, however, are software advancements that enable technologies like split or collaborative rendering (where some work is done in the cloud and some on the local device), as well as AI-based predictions of actions that need to be taken or content that needs to be preloaded. Collectively, these and other related technologies seem poised to enable a compelling set of gaming services that could drive impressive levels of revenue for the companies that successfully deploy them.
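
To make the split-rendering idea a bit more concrete, here is a minimal, hypothetical sketch of the kind of per-frame decision a streaming client might make. The pass names and thresholds below are invented for illustration and are not drawn from any shipping service.

```typescript
// Hypothetical split-rendering scheduler: decide per frame which render
// passes run in the cloud and which stay on the local device.
type RenderPass = "geometry" | "lighting" | "postProcessing" | "ui";
type Placement = "cloud" | "device";

interface LinkStats {
  rttMs: number;          // measured round-trip time to the streaming edge
  bandwidthMbps: number;  // current downstream bandwidth estimate
}

function splitPasses(link: LinkStats): Record<RenderPass, Placement> {
  // Invented thresholds: only offload heavy passes when the link is good.
  const cloudViable = link.rttMs < 40 && link.bandwidthMbps > 25;
  return {
    geometry: cloudViable ? "cloud" : "device",
    lighting: cloudViable ? "cloud" : "device",
    postProcessing: "device", // latency-sensitive passes stay local
    ui: "device",
  };
}

// AI-assisted preloading follows the same pattern: predict what the player
// will need next and stream those assets ahead of time.
```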

It’s also important to add that although strong growth in game streaming services that are less hardware dependent may seem to imply a negative impact on gaming-specific PCs, GPUs, and other game-focused hardware (because people would be able to use older, less powerful devices to run modern games), the opposite is in fact likely to be true. Game streaming services will likely expose an even wider audience to the most compelling games, and that, in turn, will likely inspire more people to purchase gaming-optimized PCs, smartphones, and other devices. The streaming service will then give them the opportunity to play (or continue playing) those games in situations or locations where they don’t have access to their primary gaming devices.

Prediction 3: Multi-Cloud Becomes the Standard in Enterprise Computing
The early days of cloud computing in the enterprise featured prediction after prediction of a winner between public cloud vs. private cloud, and even of specific cloud platforms within those environments. As we enter 2019, it’s becoming abundantly clear that all those arguments were wrongheaded and that, in fact, everyone won and everyone lost at the same time. After all, which of those early prognosticators would have ever guessed that in 2018, Amazon would offer a version of Amazon Web Services (called AWS Outposts) that a company could run on Amazon-branded hardware in the company’s own data center/private cloud?

It turns out that, as with many modern technology developments, there’s no single cloud computing solution that works for everybody. Public, private, and hybrid combinations all have their place, and within each of those groups, different platform options all have a role. Yes, Amazon currently leads overall cloud computing, but depending on the type of workload or other requirements, Microsoft’s Azure, Google’s GCP (Google Cloud Platform), or IBM, Oracle, or SAP cloud offerings might all make sense.

The real winner is the cloud computing model, regardless of where or by whom it’s being hosted. Not only has cloud computing changed expectations about performance, reliability, and security, but the DevOps software development environment it inspired and the container-focused application architecture it enabled have radically reshaped how software is written, updated, and deployed. That’s why you see companies shifting their focus away from the public infrastructure-based aspects of cloud computing and towards the flexible software environments it enables. This, in turn, is why companies have recognized that using multiple cloud types and cloud vendors isn’t a weakness or a disjointed strategy, but actually a strength that can be leveraged for future endeavors. With cloud platform vendors expected to work towards more interoperability (and portability) of workloads across different platforms in 2019, it’s very clear that the multi-cloud world is here to stay.
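
One hedged way to picture what "multi-cloud as a strength" looks like in practice: application code written against a provider-agnostic contract, with thin per-vendor adapters behind it. The interface and class names below are purely illustrative, not any vendor's actual SDK.

```typescript
// Illustrative sketch: workloads target one abstraction; thin adapters
// map it onto each cloud vendor's SDK, so swapping clouds is a config change.
interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array>;
}

// Hypothetical AWS adapter; Azure, GCP, or on-premises adapters would
// implement the same contract.
class S3Store implements ObjectStore {
  async put(key: string, data: Uint8Array): Promise<void> {
    // ...call the AWS SDK here...
  }
  async get(key: string): Promise<Uint8Array> {
    // ...call the AWS SDK here...
    return new Uint8Array();
  }
}

// Application code depends only on the interface, never on the vendor.
async function backup(store: ObjectStore, key: string, data: Uint8Array) {
  await store.put(key, data);
}
```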

Prediction 4: On-Device AI Will Start to Shift the Conversation About Data Privacy
One of the least understood aspects of using tech-based devices, mobile applications, and other cloud-based services is how much of our private, personal data is being shared in the process—often without our even knowing it. Over the past year, however, we’ve all started to become painfully aware of how big (and far-reaching) the problem of data privacy is. As a result, there’s been an enormous spotlight placed on data handling practices employed by tech companies.

At the same time, expectations about technology’s ability to personalize these apps and services to meet our specific interests, location, and context have also continued to grow. People want and expect technology to be “smarter” about them, because it makes the process of using these devices and services faster, more efficient, and more compelling.

The dilemma, of course, is that enabling this customization requires access to some level of personal data, usage patterns, and so on. Up until now, that has typically meant that almost any action you take or information you share has been uploaded to some type of cloud-based service, compiled and compared to data from other people, and then used to generate some kind of response that’s sent back down to you. In theory, this gives you the kind of customized and personalized experience you want, but at the cost of your data being shared with a whole host of different companies.

Starting in 2019, more of the data analysis work could start being done directly on devices, without the need to share all of it externally, thanks to the AI-based software and hardware capabilities becoming available on our personal devices. Specifically, the idea of doing on-device AI inferencing (and even some basic on-device training) is now becoming a practical reality thanks to work by semiconductor-related companies like Qualcomm, Arm, Intel, Apple, and many others.
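
As a rough sketch of what on-device inferencing looks like in practice, here is a browser-side example using TensorFlow.js, one of several frameworks that can run trained models locally. The model URL and feature vector are placeholders; the point is simply that the model comes down once and the raw behavioral data never leaves the device.

```typescript
import * as tf from "@tensorflow/tfjs";

// Download the trained model once; after that, scoring happens entirely
// on the device, so the raw usage data never has to be uploaded.
async function personalizeLocally(usageFeatures: number[]): Promise<number> {
  const model = await tf.loadLayersModel("https://example.com/model.json"); // placeholder URL
  const input = tf.tensor2d([usageFeatures]);  // shape: [1, featureCount]
  const output = model.predict(input) as tf.Tensor;
  const [score] = await output.data();         // e.g., a relevance score
  input.dispose();
  output.dispose();
  return score;
}
```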

What this means is that—if app and cloud service providers enable it (and that’s a big if)—you could start getting the same level of customization and personalization you’ve become accustomed to, but without having to share your data with the cloud. Of course, it isn’t likely that everyone on the web is going to start doing this all at once (if they do it at all), so inevitably some of your data will still be shared. However, if some of the biggest software and cloud service providers (think Facebook, Google, Twitter, Yelp, etc.) started to enable this, it could start to meaningfully address the legitimate data privacy concerns that have been raised over the last year or so.

Apple, to its credit, started talking about this concept several years back (remember differential privacy?) and already stores things like facial recognition scans and other personally identifiable information only on individuals’ devices. Over the next year, I expect to see many more hardware and component makers take this to the next level by talking not just about their on-device data security features, but also about how onboard AI can enhance privacy. Let’s hope that more software and cloud-service providers enable it as well.

Prediction 5: Tech Industry Regulation in the US Becomes Real
Regardless of whether major social media firms and tech companies enable these onboard AI capabilities or not, it’s clear to me that US social consciousness has reached the point where tech companies managing all this personal data need to be regulated. While I’ll be the first to admit that the slow-moving government regulatory process is ill-matched to the rapidly evolving tech industry, that’s still no excuse for doing nothing. As a result, in 2019, I believe the first US government regulations of the tech industry will be put into place, specifically around data privacy and disclosure rules.

It’s clear from the backlash that companies like Facebook have been receiving that many consumers are very concerned with how much data has been collected not only about their online activities, but also about their location and many other very specific (and very private) aspects of their lives. Despite the companies’ claims that we handed over most of this information willingly (thanks to confusingly worded and never-read license agreements), common sense tells us that the vast majority of us did not understand or know how the data was being analyzed and used. Legislators from both parties recognize these concerns and, despite the highly polarized political climate, are likely to agree fairly easily on some kind of limitations on the type of data that’s collected, how it’s analyzed, and how it’s ultimately used.

Whether the US builds on Europe’s GDPR regulations, the privacy law enacted in California last year, or something entirely different remains to be seen, but now that the value and potential impact of personal data have been made clear, there’s no doubt we will see laws that treat it as the valuable commodity it is.

Prediction 6: Personal Robotics Will Become an Important New Category
The idea of a “sociable” robot—one that people can have relatively natural interactions with—has been a staple of science fiction for decades. From Lost in Space to Star Wars to WALL-E and beyond, interactive robotic machines have been the stuff of our creative imagination for some time. In 2019, however, I believe we will start to see more practical implementations of personal robotics devices from a number of major tech vendors.

Amazon, for example, is widely rumored to be working on some type of personal assistant robot leveraging its Alexa voice-based digital assistant technology. Exactly what form the device might take and what capabilities it might offer are unclear, but some type of mobile (as in, able to move, not small and lightweight!) visual smart display that also offers mechanical capabilities (lifting, carrying, sweeping, etc.) might make sense.

While a number of companies have tried and failed to bring personal robotics to the mainstream in the recent past, I believe a number of technologies and concepts are coming together to make the category more viable this year. First, from a purely mechanical perspective, the scarily realistic machines now being demonstrated by companies like Boston Dynamics show how far movement, motion, and environmental awareness have advanced in the robotics world. In addition, the increasingly conversational and empathetic AI capabilities now being brought to voice-based digital assistants, such as Alexa and Google Assistant, demonstrate how our exchanges with machines are becoming more natural. Finally, the appeal of products like Sony’s updated Aibo robotic dog highlights the willingness that people are starting to show towards interacting with machines in new ways.

In addition, robotics-focused hardware and software development platforms, like Nvidia’s latest Jetson AGX Xavier board and Isaac software development kit, key advances in computer vision, as well as the growing ecosystem around the open source ROS (Robot Operating System) all underscore the growing body of work being done to enable both commercial and consumer applications of robots in 2019.

Prediction 7: Cloud-Based Services Will Make Operating Systems Irrelevant
People have been incorrectly predicting the death of operating systems and unique platforms for years (including me back in December of 2015), but this time it’s really (probably!) going to happen. All kidding aside, it’s becoming increasingly clear as we enter 2019 that cloud-based services are rendering the value of proprietary platforms much less relevant for our day-to-day use. Sure, the initial interface of a device and the means for getting access to applications and data are dependent on the unique vagaries of each tech vendor’s platform, but the real work (or real play) of what we do on our devices is becoming increasingly separated from the artificial world of operating system user interfaces.

In both the commercial and consumer realms, it’s now much easier to get access to what we want to do, regardless of the underlying platform. On the commercial side, the increasing power of desktop and application virtualization tools from the likes of Citrix and VMware, as well as moves like Microsoft delivering Windows desktops from the cloud, all demonstrate how much simpler it is to run critical business applications on virtually any device. Plus, the growth of private (on-premises), hybrid, and public cloud environments is driving the creation of platform-independent applications that rely on nothing more than a browser to function. Toss in Microsoft’s decision to leverage the open-source Chromium browser rendering engine for the next version of its Edge browser, and it’s clear we’re rapidly moving to a world in which the cloud finally and truly is the platform.

On the consumer side, the rapid growth of platform-independent streaming services is also promoting the disappearance (or at least sublimation) of proprietary operating systems. From Netflix to Spotify to the game streaming services mentioned in Prediction 2, successful cloud-based services are building almost all of their capabilities and intelligence into the cloud and relying less and less on OS-specific apps. In fact, it will be very interesting to see how open and platform agnostic Apple makes its new video streaming service. If Apple limits it to users of its own OS-based devices, it risks having a very small impact (even with its large and well-heeled installed base), particularly given the strength of the competition.

Crossover work and consumer products like Office 365 are also shedding any meaningful ties to specific operating systems and instead are focused on delivering a consistent experience across different operating systems, screen sizes, and device types.

The concept of abstraction goes well beyond the OS level. New software being developed to leverage the wide range of AI-specific accelerators from vendors like Qualcomm, Intel, and Arm (AI cores in their case) is being written at a high enough level to work across a very heterogeneous computing environment. While this might have a modest impact on peak performance, the flexibility and broad support that this approach enables are well worth it. In fact, it’s generally true that the more heterogeneous the computing environment grows, the less important operating systems and proprietary platforms become. In 2019, it’s going to be a very heterogeneous computing world, hence my belief that the time for this prediction has finally come.

Podcast: 2018 Year in Review

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the big news developments impacting the tech industry this year, including social media and data privacy concerns, price hits to the previously soaring FAANG stocks, developments in assisted and autonomous cars, challenges to AR and VR products, changes in the smartphone and PC businesses, the reinvigoration of the semiconductor market, and the impact of artificial intelligence.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

News You Might Have Missed, Week of December 21st, 2018

London Gatwick Airport Shuts Down Due to Drone Activity

On Thursday the runway at Gatwick airport reopened at about 3 a.m., only to be shut down again 45 minutes later after “a further sighting of drones.” It was still closed as of Thursday evening, and police are hunting for the drones’ operator.

Via Cnet

  • London Gatwick is one of the busiest airports in Europe, especially at this time of year.
  • While many might think it silly to believe that a small drone could inflict any damage on a plane, there is plenty of research that proves the opposite. While the probability of a collision is small, drones can be drawn into a plane’s turbine or inflict serious damage on a cockpit windshield.
  • According to the UK Airprox Board, there were 92 instances of aircraft and drones coming close to colliding in 2017.
  • As prices continue to decrease and capabilities continue to increase, the threat that drones pose to aircraft is high. Smaller charter planes and helicopters, which tend to fly at lower altitudes, are at the greatest risk, but of course any plane during takeoff or landing is a target.
  • The risk is high because there is a certain degree of stupidity in this area, with users not realizing the impact that a small device can have when gravity comes into play. Stupidity aside, there is also a high risk of criminal activity, where bad actors might want to intentionally cause harm.
  • Retail drones have a geofence that does not allow them to fly within a couple of miles of an airport, but apparently this can easily be bypassed if you know what you are doing.
  • There is technology available to track, divert, and disable rogue drones, but these systems are being deployed very slowly in specific locations rather than systematically across countries.
  • Drones pose other security threats when you think about crowded events or even heavily trafficked roads.
  • Like many other technologies that are being brought to market today, we are simply unprepared when it comes to security, safety, insurance and of course policies and regulations.
  • I also wonder if today’s culture is very different from that of the past. Model airplanes are not that different from drones, but I do believe that today’s culture of stunts and chasing video hits on social media, along with the reality of terror threats, makes the comparison between drones and model planes quite hard.

Google Home Holidays Ad

In the “Home Alone Again” ad, Google brings back Macaulay Culkin in his role as Kevin — but this time, with a more modern, Google Assistant-powered setup.

Via The Verge

  • Google is going all in on holiday ads; they even have one for Google Duo in which the video chat service is strategically positioned as a FaceTime alternative that works across platforms and keeps families on Android and iOS together!
  • In this very cute Google Home ad, Culkin relies on Google to help with some famous bits from the movie, including “Operation Kevin,” which automates things like locking the door, moving around a cardboard cutout on a Roomba, and turning on the lights to protect against a Joe Pesci-like thief.
  • Why am I talking about an ad? Because making an emotional connection with consumers is as important as showing how a digital assistant can help.
  • There is still a lot of work that needs to be done to get consumers to push the boundaries of what they can ask their digital assistant to do as well as how many devices can be connected to and operated through a digital assistant.
  • In past ads, Google focused on what Google Assistant can do for you with the “Make Google Do It” campaign; with this ad, there is a more deliberate focus on the connected home.
  • There were many products shown in the ad, but interestingly there were no Nest products, despite a request to lower the temperature and another to show the front door.
  • After bringing Nest back into the devices group, it seems to me that its products are still not seen as part of the Made by Google lineup and are therefore mostly an afterthought. A very different approach from Amazon’s rapid assimilation of the Ring product line.
  • Some have also noticed that the phone used in the ad does not reflect any existing hardware, which seems a little strange given how popular the Pixel phones have been.
  • If the phone was meant to represent a generic Android device, then I would have expected to see third-party Google Assistant-enabled devices, but this was not the case, as all the other devices were Made by Google products.
  • Maybe we are all reading too much into this phone, imagining an unreleased model, when the answer could be as simple as this: for a company that has focused on software for so long, the world of hardware is still all a bit new.

Slack Bans Users Who Have Visited US-sanctioned Countries

On Thursday, some Slack users began to report receiving a message from the app notifying them that they had been banned from the service because of their ties to one of the countries the US has an embargo against: Iran, Cuba, North Korea, Syria, and the Crimea region of Ukraine.

Via Mashable 

  • The message clearly states that the ban from the service is due to compliance with export control and economic sanctions laws.
  • However, many users who received the message and lost access to the service took to Twitter to say they are not in Iran nor do they have any ties with any of the countries listed.
  • Furthermore, users lamented the inability to appeal the decision, especially given that they received no warning this was going to happen.
  • It is clear to me that the current political climate is making companies nervous, and when you are popular but well aware that you do not have the gravitas of the big Internet giants, you pick safe over sorry.
  • According to a Slack representative, the ban was implemented through geolocation, which relies on IP addresses, as Slack does not have access to nationality or ethnicity data about its users (a rough sketch of this approach appears after this list).
  • Considering how some of the users who were banned did not reside in any of the countries listed, one has to wonder how accurate that geolocation data was.
  • I suppose it is somewhat refreshing to see a company not adopting the “move fast and break things” mantra. But it does seem that Slack is either overly cautious or not up to speed with all aspects of the law in this regard.
  • Since 2014, US sanctions have included a license for personal communication tools, like chat and social media, that are used to exchange personal communications. It would seem to me that Slack would fall into this category.
  • Some of Slack’s concern about compliance might be linked to the fact that its service is not blocked in Iran like other messaging services are, possibly making it a preferred service there.
  • After ZTE and, more recently, Huawei, it is understandable why a US company might want to take a broad-brush approach first and maybe review later.
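
Purely as an illustration of why this kind of IP-based geofencing produces false positives, here is a rough sketch of the approach. The lookup function, country codes, and region check are hypothetical; real GeoIP databases are exactly the stale or coarse data source that can misclassify a traveler or a VPN user.

```typescript
// Hypothetical GeoIP lookup; real services map an IP address to a
// country/region guess that can be stale or simply wrong.
declare function geoIpLookup(ip: string): Promise<{ country: string; region?: string }>;

// US-embargoed destinations as ISO country codes, plus the Crimea region.
const EMBARGOED = new Set(["IR", "CU", "KP", "SY"]);

async function shouldBlock(ip: string): Promise<boolean> {
  const geo = await geoIpLookup(ip);
  if (EMBARGOED.has(geo.country)) return true;
  // Crimea is a region-level case rather than a whole country.
  return geo.country === "UA" && geo.region === "Crimea";
}
```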

What to Consider When Marketing PCs to Millennials

Millennials are so yesterday! Gen Z is who we are told we should worry about. True, we must understand our kids now to figure out how they will shape the world. In the short term, however, especially if you are trying to flog, I mean sell, something, it is Millennials you want to understand as they are the ones with purchasing power.

A couple of weeks ago I shared some data from a recent study we, at Creative Strategies, conducted in the US on workflows and the influence that features native to smartphones are having on what users want to see on their PCs. While I was looking at specific sample cohorts, I found some fascinating data points that PC manufacturers should keep in mind when targeting Millennials.

Technology Adoption

Early tech adopters are often characterized by two core qualities: their tech-savviness and their high propensity to adopt technology early in the cycle. Many assume that, because most Millennials grew up with technology, they are by default early adopters. Millennials (18 to 34 years old) in our panel certainly check the first box: 72% of them consider themselves pretty tech savvy, with family and friends often turning to them for tech advice. When it comes to buying new gadgets, though, only 50% said they tend to be the first person in their peer group to purchase.

Work and Play Is a Blend Even on a PC

I have often talked about how Millennials seem to have given up on trying to find the work-life balance that we members of Gen X have so desperately tried to find. Instead, Millennials are working towards a blend of work and play that ultimately might deliver a balance, or at least a better sense of being in control. When we asked our panelists how often they start and finish their work or student day at home, only 10% of Millennials said never. This compares to 18% among Gen X.

The phone is the tool Millennials turn to so they can check email, calendar, and social media before heading out in the morning (45%) and keep an eye on things in the evening (27%). What is interesting is that the PC, which for many generations represented the king of productivity (even more so since we have been blessed with smartphones), is becoming a device that Millennials rely on for both productivity and entertainment. Twenty-three percent of our panelists in the 18-34 age bracket open their PC/Mac/Chromebook in the morning to check email, calendar, and social media, but 17% turn to the same devices in the evening to binge watch content or to game. That is almost double the share among our panelists in the 35-54 age group.

The Right Tool for the Job

Over the past couple of years, we have seen operating systems as well as apps trying to bridge the divide between phones and PCs. Apps allow users to do most of what they do on their phones on the PC, and the other way around, although maybe a little less naturally. Operating systems even allow users to pick up a phone call or answer a message from their PC/Mac so as not to interrupt their workflow. How much Millennials embrace this crossover compared to Gen X is quite interesting and could help PC vendors better understand what features to focus on, both in their product design and in their marketing.

Maybe the most telling data point showing how differently Millennials think about their phone and their PC is that answering a phone call does not top the list of tasks they turn to the phone for. Social media tops their roster, and with a clear lead over Gen X. This can signal two things. First, social media for Millennials really started on the phone rather than the PC: Twitter, Instagram, and WhatsApp are children of the app stores, while Facebook had the chance to establish itself on the PC before it moved to mobile, and for many Gen Xers, the PC is still where social media happens. Second, the data might signal that Millennials are embracing other devices, like the PC, albeit only a little, when it comes to voice communication.

I shared some data earlier that shows how, when it comes to entertainment, Millennials seem happy to turn to the PC, especially if they have been on the phone all day. When we asked device preference for video, however, 43% of the 18 to 34 year olds on our panel picked their phone over their PC. Such a preference might come down to privacy, apps, or force of habit; we will dig more into this in future studies. Whatever the reason, PC manufacturers often assume that, given a choice for content consumption, users will pick the bigger screen. This data shows that is not the case, which should give them some food for thought.

More tidbits from the study showed us that Millennials are more comfortable using their phone as a hotspot and are less concerned about Wi-Fi security, but they are also less willing to share personal data in exchange for free Wi-Fi. Isn’t it fascinating that a generation that lives on social media could be concerned about privacy? Certainly a topic to explore more in the future, but for now, maybe a warning that this generation is more complicated than it looks.

Rejuvenated Intel Highlights Benefits of Competition

Competition really is a great thing, and if you ever really needed a reminder of how and why, look no further than the recently rejuvenated, albeit humbled, semiconductor behemoth based in Santa Clara, CA.

After an extremely difficult 2018 that featured both major ongoing delays in producing new 10 nm chips and the unceremonious exit of its CEO, Intel is also facing the toughest competitive environment it’s seen in some time. Not only is a resurgent AMD becoming a serious threat in both the PC and server markets, but Nvidia has managed to snag most of the focus in the attention-grabbing AI market, and new offerings from Qualcomm show that its computing capabilities are much stronger than many may have realized.

As a result of all these challenges, Intel has been forced to rethink a number of its previous investments, reorganize its increasingly scattered divisions, and put together a strategy that could both directly address the new competitive environment and leverage many of the unique capabilities that make Intel what it still is today (lest we forget): the largest semiconductor company in the world.

Thankfully, the company recently laid out its new vision through a series of announcements about new technology directions and strategy delivered at an industry analyst summit and tech press event. Specifically, on the technology side, the company discussed a new variation on the “chiplet” concept that leverages a new 3D chip-stacking technology codenamed Foveros. Instead of trying to continue along the traditional Moore’s Law path of increasing transistor density horizontally via large and complicated monolithic chips created on a single process technology node, Foveros technology represents an important pivot towards vertical density. Practically speaking, what this means is that the company can combine several different chips created at different process sizes, while still increasing overall transistor density, in a single chip package. It’s a fascinating development that highlights how Intel is still able to maintain its long history of manufacturing advances, despite the challenges it faced in bringing 10 nm chips to market.

The first real-world example of Foveros’ ability to integrate heterogeneous pieces is the newly announced Sunny Cove architecture, expected to ship in 2019, which will combine 10 nm “big” Core CPUs with several “little” Atom CPUs in a new hybrid x86 architecture (which, yes, sounds conceptually very similar to the big.LITTLE designs that Arm and its customers have been talking about for years). The idea is to enable much more power-friendly x86 designs—it will be interesting to see what kinds of devices this new platform will enable.

At the strategic level, the company highlighted a new approach built on six pillars—Process, Architecture, Memory, Interconnect, Security, and Software—that manages to tie together a number of different resources Intel owns into a nicely unified, and powerful, vision of the future of computing. The process advances are built not just on Foveros, but also on the simultaneous work it’s been doing on both 7 nm and 5 nm, both of which are expected to benefit from the hard-won lessons the company learned on 10 nm. Throw in the announcements about plans for new fabs, and it’s clear the company is focused on moving forward aggressively on the process front.

Architecturally, the company discussed the wide range of different architectures it’s creating, including CPU (scalar), GPU (vector), AI (matrix), and FPGA (spatial) compute offerings, as well as advancements in each of those areas. Over the years, Intel has amassed an impressive collection of different companies and architectures, but this was the first time it provided a unified vision that tied all the pieces together. The event also saw the release of a few more details on its upcoming dedicated GPU effort, currently codenamed Xe and scheduled for release in 2020.

On the memory side, the company highlighted its advances in Optane storage and memory products. Intel emphasized new types of memory that break down the barriers between traditional DRAM and storage and enable the creation of more sophisticated and much faster overall computing system designs. These memory capabilities are a unique and often overlooked advantage Intel offers versus almost all of its competition. Given the exploding amounts of data being created and processed, these memory technologies will be critically important for the increasingly large data sets that data center components will need to handle. (The fact that a simpler form of 3D stacking process technology is also used to build many Optane parts certainly doesn’t hurt either.)

The need to provide better and faster connections between various elements is another key capability in building more sophisticated chip and system designs in an increasingly heterogenous computing world. True to form, Intel talked about a wide range of options it offers in this area as well. From 5G modems to silicon photonics to new ultra-high-speed serial connections between chiplet components in stacked 3D designs, Intel has a number of interconnect technologies that it can leverage in future components and devices.

Security, of course, is a key factor for any company today, and, though Intel has faced some big concerns around Spectre, Meltdown, and other related chip architecture flaws, the company recognizes the need to incorporate security capabilities into all of its offerings. In particular, Intel is investing in a multi-pronged security story that reaches across the chip level, SoC level, board level, and software level to ensure the safest possible devices.

Finally, one of the most audacious new goals for Intel is a new software strategy built around what it is temporarily calling One API. The basic concept is to create a layer of software abstraction that would allow programmers to write at a higher level and then smartly take advantage of whatever hardware capabilities are available in a given system, from hybrid chip architectures to unique memory offerings and more. In theory, this includes the ability to send certain bits of code to one chip type and other chunks to other types while still maintaining most of the raw performance that would be available if programmers wrote straight to the metal. It’s a goal that many people have talked about—and it still remains to be seen if Intel can execute on it—but it would certainly provide a key advantage to Intel in an increasingly heterogeneous computing world.
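
Intel hasn’t published One API’s actual interfaces, so treat the following as a purely conceptual sketch of the “write high, dispatch to whatever silicon is present” idea the strategy describes; every name in it is invented for illustration.

```typescript
// Conceptual sketch only: one abstraction, many accelerators.
type Accelerator = "cpu" | "gpu" | "ai" | "fpga";

interface Kernel {
  name: string;
  // Ordered preference: matrix-heavy work prefers the AI engine, etc.
  preferred: Accelerator[];
  run(device: Accelerator, data: Float32Array): Float32Array;
}

// The runtime, not the programmer, picks the best available device,
// falling back to the CPU when nothing better is present.
function dispatch(
  kernel: Kernel,
  available: Set<Accelerator>,
  data: Float32Array
): Float32Array {
  const device = kernel.preferred.find((d) => available.has(d)) ?? "cpu";
  return kernel.run(device, data);
}
```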

In addition to these important technology and strategy announcements, it was clear that there was a new attitude within Intel’s executive ranks. Along with a humbler approach, the company openly talked about being a smaller player in a bigger market. Clearly, the goal was to emphasize that the company now sees itself as able to participate in a broader range of opportunities than it traditionally has. There was even a joking reference to bringing back former CEO Andy Grove’s desire for Intel to always be paranoid about the competition—a quality that, frankly, seemed to fade over the last few years.
At roughly 107,000 employees, Intel is a very large organization, and it can often be tough to turn big ships around. It’s clear, however, that there’s a fresh attitude and approach there that certainly makes them appear to be much better prepared for an increasingly diverse computing future. Now, if they could only fill that CEO job….

What Tech Companies Know About Their Future

I recently did an interview with a national publication about the bad year tech has had in 2018. If you believe the tech media, Silicon Valley is dead, and the FAANG companies are so big they need to be regulated. There is no question that tech has a big black eye due to security breaches, its role in spreading fake news, and a plethora of other things that make these companies look like Darth Vader.

These companies are growing and in many cases planning to expand. Google is creating a new working village in downtown San Jose that can accommodate up to 10,000 people. Facebook has bought new property near its current campus, and Apple announced last week that it will build a 133-acre campus in Austin and add new regional centers in Seattle and L.A., on top of the new campus being built in North San Jose now and a smaller one in Sunnyvale, CA. Amazon is building two new headquarters in New York and Northern Virginia, and even Netflix has expansion plans in the works.

If these companies are in trouble, it is hard to see: they are confident about their future and have these kinds of expansion plans in the works. It suggests they know something that the media and their detractors don’t understand, or don’t want to understand, preferring instead to push a vilifying argument against them.

But the truth is that tech is now at the center of all of our lives and is only going to become more pervasive over time. I would argue that we are only halfway through what is a 70-80 year journey from analog to digital. We are on the cusp of delivering all types of new technologies, from self-driving cars, smart cities, and connected homes to VR, AR, AI, machine learning, and robotics-based technologies that will eventually encompass all that we do in our tech-driven future. Tech is on track to impact and influence pretty much every aspect of our business and personal lives.

I don’t want to make light of the serious issues that big tech is facing today, from the possibility of government regulation to the challenge of keeping their customers secure. Companies like Facebook and Twitter face problems ranging from keeping people from posting false and misleading information to the broader issue of how social media has become a threat to democracies.

All of the big tech companies understand that if they give people what they want and need in the way of digital information, services, and products, customers will back them by using their products, clicking on ads, and consuming more and more of their video in digital formats on an array of devices.

Of course, the media gets more clicks and readers if it focuses on the dark side of tech. And to be fair, the dark side does need to be highlighted when its impact is truly negative and harmful. This is especially true on the subject of AI and its potential impact on our world, and of security breaches that cost customers real money. However, there needs to be a balance in this type of coverage, especially from the non-tech media, who cover tech only occasionally and mostly when the news is negative.

Besides driving our economy and being the top job-creation engine in the world, technology has played a significant role in making our lives better and making us more productive and efficient in our everyday lives. Technology in all forms is on track to drive even higher economic growth and impact our lives in many ways, which will encourage greater business and consumer demand in the future.

That is why the FAANG companies are expanding: they truly understand that our transition from analog to digital has a long way to go. If they can keep creating products that their customers want and take the necessary steps to correct the negative areas that plague some of them, their customers will continue to support them, allowing them to grow and expand in the process.

Podcast: Qualcomm Tech Summit, Intel Analyst Event, Nvidia Robotics

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing the recent Qualcomm Snapdragon Tech Summit event and its impact on 5G, smartphones and computing, discussing the importance of Intel’s recent Analyst Summit and strategy announcements, and talking about news from Nvidia on a new robotics platform update.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

What To Expect From the First 5G Phones

Over the next several weeks and months, the first ‘mobile’ 5G services will be introduced by AT&T and then subsequently by the other major U.S. operators. The initial devices will be 5G mobile ‘hotspots’ (or ‘pucks’), manufactured by Netgear and Inseego. But during the first half of 2019, we will also see the first 5G phones from Samsung and Motorola, with several other leading Android OEMs expected to introduce 5G phones before the end of the year. What will these phones look like, and what should we expect?

From a design standpoint, there won’t be anything earth-shattering here. Since 5G will largely be in ‘experiment’ mode, given the early days of standards-based equipment and limited initial deployments, I’m expecting the OEMs to be relatively conservative with regard to their first wave of 5G-enabled devices. These initial 5G phones will look like many premium or flagship phones sold today. The Qualcomm Snapdragon 855 processor will be lightning fast, and the phones will undoubtedly sport high-end photo and video capabilities that will help to showcase 5G in some way.

The initial 5G experience will be sort of like ‘Super Wi-Fi’, in that there will be islands of 5G coverage within a city, with the phone defaulting to 4G LTE when not in a 5G zone. 5G will initially be available in a handful of cities (AT&T’s list here, Sprint’s list here), and we expect the 5G-specific coverage in those cities to cover a modest footprint at first (the operators have been very cagey about specifying the extent of initial 5G coverage). When in an area of 5G coverage, there will be a special indicator on the device, and the speed difference will be noticeable. It will be sort of like the difference between a good 4G LTE connection today (~50 Mbps) and the speed and latency lift of a super speedy home broadband service (~150 Mbps or more). That type of experience will be available on ‘islands’ of 5G coverage within cities, again sort of like a super Wi-Fi hotspot. It will work best in a stationary situation, might work when walking, and will certainly switch over to 4G when driving.

There might also be a difference in the experience depending on the operator, though it’s hard to peg exactly, because deployments will vary by operator and by market, especially in the early days. As one generalization, expect AT&T’s and Verizon’s mmWave-based 5G services to be speedier than their rivals’, but their coverage to be the most ‘hotspot’-esque. Sprint’s won’t be as fast, but its coverage within a city might be more predictable. T-Mobile is going for broader coverage by focusing on 600 MHz for its initial 5G service, but the speed won’t be as dramatically different from 4G. TMO is also a bit of a wildcard, on two fronts: the uncertain outcome and timing of the proposed Sprint deal, which has a huge bearing on its 5G strategy; and the fact that Qualcomm’s Snapdragon 855 chip doesn’t support 600 MHz, which has certainly thrown T-Mobile a curveball.

For those who recall the rollout of 4G LTE in the 2010-2012 era, there will be similarities. Even with all the hype around 5G, the 4G roadmap is pretty compelling. ‘Gigabit’ LTE is available in an increasing number of cities, as operators deploy a mix of 4×4 MIMO, carrier aggregation, 256 QAM, and LAA. The Snapdragon 855 chip, in addition to its 5G capabilities, supports 2 gigabit LTE.

Remember that for a time, AT&T and T-Mobile marketed their 3.5G (HSPA+) services as ‘4G’, as they were arguably as fast as some of the initial LTE services (especially the one introduced by MetroPCS using only a 5 MHz channel). We might see that playbook repeated, since the best of 4G LTE will be nearly as good as, or even better than, early 5G in some instances.

There are some other quirks and question marks. All voice services, at least for the next few years, will use the LTE bearer (VoLTE in most instances). There’s no great incentive to migrate to “Voice over 5G” until 5G Standalone (SA) networks are built (DISH, anyone?). Handoff is going to be another interesting matter. Although the control plane is through LTE, it will be interesting to see whether inter-carrier handovers from 5G to 4G are handled smoothly. I’m most intrigued by what the experience will be going from a high-band mmWave session to LTE. I’m hoping the process will be smoother and more seamless than the earlier days of VoLTE/Voice over Wi-Fi. As another plot-thickener, add Wi-Fi to the mix here, and there’s likely to be some quirkiness moving between 5G, LTE, and Wi-Fi.

Battery life is another question mark. The big litmus test will be how phones deal with mmWave signals. We all know that battery drain accelerates when one is further from a cell site and the phone has to ‘work harder’ to get a signal. New methods are being employed to account for the shorter range of mmWave-based signals so as not to overly affect power consumption, but this nevertheless bears watching.

5G smartphones will become more differentiated and compelling in the 2020-2021 time frame, as coverage becomes more broadly available and we see some of the capabilities of the ‘next wave’ of 5G, such as ultra-low latency, introduced. This is when we’ll see the development of apps and content that harness some of the true capabilities of 5G, such as in the AR/VR and gaming spaces. One can also expect AT&T to increasingly leverage its DTV and Warner Media assets as part of its 5G strategy, for example offering attractive video bundles, HD content, and more generous allowances for rich media that reflect the lower cost to deliver data in a 5G world.

The expected availability of a 5G iPhone from Apple in 2020 will also galvanize the developer community to create apps and content that will showcase 5G phones and help justify what will likely be premium prices for those devices.

What I Learnt about My Kid After a Week with a Phone

My daughter is almost eleven, and she has yet to get her own phone. She really does not need one, as she is homeschooled and we are the ones taking her to most of her activities. For work and play, she has both a Surface Laptop and an iPad. But last week was different: I was attending the Qualcomm Snapdragon Summit in Maui, and she came with me. I let her borrow an iPhone XR so she could keep me posted on her and my mom’s whereabouts over iMessage.

Electronics at home are an “earned privilege.” Most days we trade electronics time for reading, focus during homeschool time or outstanding behavior. Of course on holiday with grandma, while mom was busy in meetings, that earning factor went out the window and there was access to an iPad in the room and an iPhone outside.

Not that I planned on it, but this “experiment” of mine came on the back of the 60 Minutes special on screen addiction, which provided a lot of information about the impact of screen time on young brains. If you watched it, or just read through the summary here, you might think I am crazy for still planning to give my child a phone after a week-long tryout. The core reason behind my decision is that I saw how, out in the big wide world, the phone was a tool for her, even more so than any electronics she uses at home.

Personalization, Mobility, and Camera

Despite knowing that the phone was a week-long loaner, my daughter took great care in making it hers. After using her Apple ID to log in and restore from her iPad backup, she changed her lock screen and her background, and chose her screen brightness and sound – which, to my surprise, was not muted, as mine has been for years now. You could clearly see that personalizing the phone was a way for her to express herself.

It was also evident that the increased mobility of the phone form factor over her iPad got her to use it more across the board but also a little differently. This meant taking advantage of different features like maps to navigate our outings, nature apps to recognize local flowers and fish, and a translation app from English to Hawaiian.

The camera, which at home is mostly used to have fun with stickers and silly videos, became a tool to record fun moments with a friend or document local wildlife, not for posting on social media but merely to make memories. Thankfully, aside from what mom posts, my kid has no social media presence, which for some parents, I know, is too much already!

Different Gaming

Seeing how my daughter can turn into a “gremlin fed after midnight” when she has to stop gaming at home, I was concerned that a phone would just make things worse. I was amazed to see that she played different games from the ones she usually plays on her iPad. Minecraft, Fortnite, and Roblox were put aside for Sonic, puzzle games, and brain-training games.

It seemed as though her iPad is her proper gaming device, while the phone was indeed a mobile gaming device used more as a gap filler with simpler games. The proper gaming session would return in the evening, when the phone turned into her music player and the iPad resurfaced for gaming.

While we have consoles at home, the time spent on them is somewhat limited, mostly because my child is a touchscreen gamer by nature and the controllers, to this day, feel a little foreign to her. I do wonder if the success of the Nintendo Switch – which is on her list for Santa – is due, in part, to its ability to bridge the console and tablet experience so well.

Obsession vs. Addiction

What transpired from the week was also the difference between obsession and addiction. There is no doubt that some games lead to behaviors that closely resemble addiction. For my kid, Fortnite, Minecraft, and Roblox certainly bring out her dark side. Yet what drives her is a little different for each game. With Fortnite it is about team play and not letting the team down in the game; Minecraft is about accomplishment; and Roblox is more about the social aspect, since the players are friends from school she talks to!

Outside of gaming though I feel that it is more about little obsessions rather than addiction. This is no different than discovering a book series and wanting to read the whole set in one go, or watch a movie like “Black Panther” enough times to know almost the entire script!

So in the week with her phone, I saw the current Avengers craze moving from the TV screen and comics to memes, a new vehicle for her obsession enabled by the internet. I am sure that in a few months we will move on to something else, in the same way we liked pizza for the past six months and now we hate it!

My main issue with studies that look at screen time as a generic thing is that things are not as simple as that. I do not dispute the science: I am sure young brains are affected by what kids do on these devices, but it is precisely what they do that we need to look into. One data point mentioned in the program was that toddlers asked to return an iPad used to play an instrument in a game did so 45% of the time, while toddlers who played a real instrument did so 60% of the time. The key here is not the screen, but the app and the gratification the app gave through different stimuli. I am sure if you tested kids doing math with pen and paper against kids doing the math on an iPad, they would stop at the same rate when asked!

My Key Takeaway

So what did we learn?

For my daughter, after a week in Paradise, the biggest sadness came not from leaving sunshine, pools, and turtles but rather from returning the phone to me.

For me, this week was key to understanding that learning how my child uses technology is no different from figuring out what sports she wants to engage with, what movies are appropriate for her, and, to some extent, what kind of human being she should be. Like anything else, kids need guidance on what is right for them and what is not, but this has much more to do with how they engage with the screen than with screen time per se. In other words, not all activities done through a screen are created equal, and it is up to me as a parent to guide her to those that enrich her life. Of course, guidance alone is not going to be enough (if you are a parent, you know that), so having tools that help you monitor, set the right access, and make sure your child is not taken advantage of is indispensable. Pandora’s box does not have to be ripped wide open!

Microsoft Browser Shift Has Major Implications for Software and Devices

Sometimes it’s the news behind the news that’s really important. Such is the case with the recent announcement from Microsoft that it plans to start using the open-source Chromium project as the basis for future versions of its Edge browser. At a basic level, it’s an important (and surprising) move that seems significant for web developers and those who like to track web standards. For typical end users, though, it seems a bit ho-hum, as it basically involves under-the-hood changes that few people are likely to think much about or even notice.

However, the long-term implications of the move could lead to some profoundly important changes to the kinds of software we use, the types of devices we buy, the chips that power them, and much more.

The primary reason for this is that by adopting Chromium as the rendering engine for Edge, Microsoft should finally be able to unleash the full potential of the platform-independent, web-focused, HTML5-style software vision we were promised nearly a decade ago. If you’ll recall, initial assurances around HTML5 said that it was going to enable software that could run consistently within any compatible browser, regardless of the underlying operating system. For software developers, it would finally deliver on Java’s initial promise of “write once, run anywhere.” In other words, we could finally get to a world where everyone could get access to all the best software, regardless of the devices we use and own, and the ability to move our own data and services across these devices would become simple and seamless.

Alas, as with Java, the grandiose visions of what was meant to be didn’t come to pass. Instead, HTML5-based applications struggled with performance and compatibility issues across platforms and devices. As a result, the potential nirvana of a seamless mesh of computing capabilities surrounding us never came to be, and we continue to struggle with getting everything we own to work together in a simple, straightforward way.

Of course, some might argue that they prefer the flexibility of choices and unique platform characteristics, despite the challenges of integrating across multiple platforms, application types, etc., and that’s certainly a legitimate point. However, even in the world of consistent software standards, there was never an intention to prevent choice or the ability to customize applications. For example, even though Chromium is also the web rendering engine for Google’s Chrome browser, Microsoft’s plan is to leverage some of the underlying standards and mechanisms in Chromium to create a better, more compatible version of Edge, but not build a clone of Chrome. That may sound subtle, but it’s actually an important point that will allow each of these companies (as well as others who leverage Chromium, such as Amazon) to continue to add their own secret sauce and provide special links to their own services and other offerings.

By moving the massive base of Windows users (as well as Edge browser users on the Mac, Android, and iOS, since Microsoft announced its intention to build Chromium-powered browsers for all those platforms as well) over to Chromium, the company has single-handedly shifted the balance of web and browser-based standards towards Chromium. This means that application developers can now concentrate more of their efforts on this standard and ensure that a wider range of applications will be available—and work in a consistent fashion—across multiple devices and platforms.

There are some concerns that this shifts too much power into the hands of a single standard and, some are worried, to Google itself, since it started the Chromium project. However, Chromium is not the same as Chrome (despite the similar name). It’s an open source-based project that anyone can use and add to. With Microsoft’s new support, they’ve ensured that their army of developers, as well as others who have supported the Microsoft ecosystem, will now support Chromium. This, in turn, will dramatically increase the number of developers working on Chromium and, therefore, improve its quality and capabilities (in theory, at least).

The real-world software implications of this could be profound, especially because Microsoft has promised to embed Chromium support into Windows. Doing so will give web-based applications access to things like the file system, offline operation, touch support, and other core system functions whose absence has previously kept browser-based apps from truly competing with stand-alone apps. This concept, also known as progressive web apps (PWA), is seen as critical in redefining how apps are created, distributed, and used.
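
To make the PWA concept concrete, here is a minimal sketch of the kind of service worker that gives a browser-based app one of those stand-alone capabilities, in this case offline operation. This is an illustrative sketch only; the cache name and asset paths are hypothetical placeholders, not part of any product mentioned here.

```typescript
// sw.ts: a minimal, hypothetical offline-capable service worker.
const CACHE_NAME = "app-shell-v1";                                 // hypothetical cache name
const APP_SHELL = ["/", "/index.html", "/app.js", "/styles.css"];  // hypothetical paths

self.addEventListener("install", (event: any) => {
  // Pre-cache the app shell so the app can launch with no network at all.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(APP_SHELL))
  );
});

self.addEventListener("fetch", (event: any) => {
  // Cache-first strategy: serve a cached response if one exists,
  // otherwise fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

The same service worker plumbing underpins capabilities like background sync and push notifications, which is why having a Chromium-class engine embedded in the OS matters so much for this class of app.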

For consumers, this means the need to worry about OS-specific mobile apps or desktop applications could go away. Developers would have the freedom to write applications that have all the capabilities of a stand-alone app, yet can be run through a browser and, most importantly, can run across virtually any device. Software choices should go up dramatically, and the ability to have multiple applications and services work together—even across platforms and devices—should be significantly easier as well.

For enterprise software developers, this should open the floodgates of cloud-based applications even further. It should also help companies move away from dependencies on legacy applications and early Internet Explorer-based custom enterprise applications. From traditional enterprise software vendors like SAP, Oracle, and IBM through modern cloud-based players like Salesforce, Slack, and Workday, the ability to focus more of their efforts on a single target platform should open up a wealth of innovation and reduce difficult cross-platform testing efforts.

But it’s not just the software world that’s going to be impacted by this decision. Semiconductors and the types of devices that we may start to use could be affected as well. For example, Microsoft is leveraging this shift to Chromium as part of an effort to bring broader software compatibility to Arm-based CPUs, particularly the Windows on Snapdragon offerings from Qualcomm, like the brand-new Snapdragon 8cx. By working on bringing the underlying compatibility of Chromium to Windows-focused Arm64 processors, Microsoft is going to make it significantly easier for software developers to create applications that run on these devices. This would remove the last significant hurdle that has kept these devices from reaching mainstream buyers in the consumer and enterprise world, and it could turn them into serious contenders versus traditional X86-based CPUs from Intel and AMD.

On the device side, this move also opens up the possibility for a wider variety of form factors and for more ambient computing types of services. By essentially enabling a single, consistent target platform that could leverage the essential input characteristics of desktop devices (mice and keyboards), mobile devices (touch), and voice-based interfaces, Microsoft is laying the groundwork for a potentially fascinating computing future. Imagine, for example, a foldable multi-screen device that offers something like a traditional Android front screen, then unfolds to a larger Windows (or Android)-based device that can leverage the exact same applications and data, but with subtle UI enhancements optimized for each environment. Or, think about a variety of different connected smart screens that allow you to easily jump from device to device but still leverage the same applications. The possibilities are endless.

Strategically, the move is a fascinating one for Microsoft. On one hand, it suggests a closer tie to Google, much like the built-in support for Android-based phones did in the latest version of Windows 10. However, the work is specifically being done through open source, and Microsoft is likely to leverage its recent GitHub acquisition to make web standards more open and less specifically tied to Google. At the same time, because Apple doesn’t currently support Chromium and is still focused on keeping its developers (and end users) more tightly tied into its proprietary OS, Microsoft is essentially further isolating Apple from key web software standards. In an olive branch move to Apple users, however, Microsoft has said that it will bring the Chromium-powered version of Edge to MacOS and likely iOS, essentially giving Apple users access to this new world of software, but via a Microsoft connection.

In the end, a large number of pieces have to come together in order to make this web-based, platform-free version of the software world come to pass, and it wouldn’t be the least bit surprising to see roadblocks arise along the way. Still, Microsoft’s move to support Chromium could prove to be a watershed moment that quietly, but importantly, sets some key future technology trends into motion.

VR Begins Transition From (Failed) Next Big Thing to Sustainable Business

The ongoing theme in the media for much of the last 12 months has been that Virtual Reality (VR) as a technology is a bust. And from a pure headset shipment number perspective, it’s been hard to argue against that narrative. But as with most things in this world, the reality is a bit more nuanced than that. Further, I would argue that VR is now poised to move from a technology burdened with unrealistic expectations to one that will enable smart vendors engaged in the space to build more modest, but profitable and sustainable, businesses going forward.

The Headset Decline
At IDC we track three categories of VR headsets: screenless viewers (such as Samsung’s Gear VR), tethered (such as the HTC Vive), and standalone (such as the Oculus Go). (We exclude Google Cardboard-based products from our numbers.) If we look back two years, to 3Q16, we see that the entire market shipped 2.4M units, and that screenless viewers constituted over 2M of those units, with more than half coming from Samsung. As it often does, Samsung was moving to establish itself in a new category by leveraging its strong position in smartphones, often bundling its Gear VR screenless viewer at low or no cost with its high-end Galaxy phones. The company continued this practice for some time, but that 3Q16 number was the high point, and by 2018 it had all but given up on the Gear VR. In 3Q18 Samsung shipped just 125K of its screenless viewers.

Samsung’s early push and later shift away from the screenless viewer category caused the overall VR market to appear to grow fast (from a small base) and then fall off a cliff. But inside the bigger numbers, the other two headset categories were continuing to evolve. HTC and Facebook lowered the price of their headsets, and later HTC launched a Pro version of the Vive. Sony launched PlayStation VR. A number of vendors launched products using Microsoft’s Mixed Reality platform. And Lenovo, Vive and Oculus launched standalone products. All of this led to some notable ups and downs along the way, but here’s the bottom line. In 3Q16 tethered shipments totalled 372K; in 3Q18 they hit 1.1M. In 3Q16 standalone headsets were at 30K; in 3Q18 they grew to 392K. So, yes, the total market in 3Q18 was down versus the same quarter in 2016 (1.9M versus 2.4M), but the product mix shifted dramatically and revenues grew substantially.

Early Adopters Are Happy
So the headset market itself has been a wild ride over the past two years. During that time, we’ve seen a substantial build-out of the existing platforms and the content available on them. The challenge, when it comes to pleasing consumers, is that the type and quality of content out there seems to please current owners, but it doesn’t excite non-owners enough to buy. We recently surveyed over 2,000 U.S. consumers, and among the small subset of that group who currently own VR headsets, most are happy with both the hardware itself and the content (especially those who own tethered and standalone products). However, when we asked non-owners about their interest in VR, the response was tepid at best. The clear challenge here is that to date there’s been no specific application or type of content compelling enough to drive more mainstream users to deal with the cost and hassle of acquiring the VR hardware. This obviously creates a challenge for the market: How do you incentivize developers and content producers to create better experiences without a large enough installed base? How do you grow the installed base without better content?

While the industry ponders the consumer challenge, many vendors in the space have moved to embrace a near-term opportunity upon which they can build a business in the meantime: Business users.

I have talked about the opportunities for VR in the commercial segment in a previous column, so I won’t repeat the argument here except to say that since I wrote that back in April, interest from commercial customers has only grown. And vendors are moving to embrace this interest.
HTC’s Vive Pro is a great example of a company listening to what business users say they need. The hardware addressed business requirements, including a higher-resolution screen, and HTC’s Vive Business Edition package rolled in a professional use license, commercial warranty, deployment options, and dedicated phone support.

Facebook has also been paying close attention to the commercial side of things and has built out business-specific bundles for both its Oculus Rift and Oculus Go products. Likewise, Lenovo is now offering its Mirage Solo VR headset as part of a bundle targeting education deployments.
As the business use case for VR continues to solidify, the biggest hurdle won’t be the hardware itself, but the need for more developers—and the tools they need—to build out both broad business VR applications as well as company-specific, proprietary ones. This will be a challenge, but there’s money to be made here, and it involves significantly less risk than trying to create consumer content that requires dramatically more scale to drive profitability.

Looking ahead, there’s reason for cautious optimism in the VR market. Early next year, Facebook will ship its Oculus Quest headset, a standalone product that offers significant performance gains for the category. I expect Facebook to tell a strong story about the use case for Quest in business. And we’ll see Facebook, HTC, Lenovo, and others continue to build out more interesting use cases for both consumer and commercial users. VR may not have lived up to the early hype, but the technology still has a role to play in our world. Companies who stay the course, and play it smart, should find a profitable, sustainable way forward.

PC Users’ Smartphone-Envy

Millions of people rely on their smartphones every day for their on-the-go computing needs. For many, especially in younger demographics, the smartphone is their sole or main computing device. Whether it is email, social media, or gaming, consumers across the world have become so dependent on their phones that the whole tech industry has started to address screen addiction. Considering how much time users spend on these devices, we at Creative Strategies wanted to understand how smartphones fit into people’s workflows, and to do so we conducted an online study of 1,000 US consumers at the end of November.

The first interesting data point we found is that 34% of our panel has both a work PC/Mac and a personal one, while only 15% rely solely on a work PC for both work and personal computing needs. Forty-three percent of our panel only has a personal PC/Mac. This landscape is fascinating, as it points to the opportunity there still is to reach consumers, and not just IT departments, when it comes to PC sales. While engagement on PCs might have dropped when smartphones first hit the market, it is clear that PCs and Macs still have a place in our homes.

That said, the fact that 56% of users on our panel do not have a Windows PC with a touchscreen points to an installed base that is ready for an upgrade. This is even more obvious when we see that 61% of the PC users on our panel said their PC has no pen support.

A Reality Check on How People Work

Before getting into what users want from their PCs, I think it is interesting to look at how they are currently using them, as this provides excellent insight into how to market their next upgrade. Among the people who are currently working, 47% said they usually work from their office desk and another 30% from a home office desk, making mobility a low priority among our panelists.

Work-life balance is still a struggle for most, as we seem to rely on our phones to keep us on top of things without being dragged into work more than necessary. Forty percent of our working panelists check their email, calendar, and social media every morning before leaving for work. Twenty-seven percent keep an eye on things on their phone in the evening, trying not to open their PC, while 11% are always on their phone in the evening but use their PCs to either binge-watch or game. Only 17% of our working panelists never start or finish their working day at home, which makes me feel somewhat better about my own work/life struggle!

When it comes to how people work across devices, there was much less consensus than we had for where work takes place. Our working panelists are quite varied in their habits of working on one machine or several. Seventeen percent never work on multiple devices, 12% often start working on a work device and finish on a personal one at home, 23% only use their work device, while 25% pick up a phone or a tablet for a quick edit. Twenty-three percent work seamlessly across devices depending on convenience. What is interesting is that this number only grows to 29% among early tech adopters, pointing to the fact that working across devices does not necessarily require tech expertise these days, especially when the device mix includes a phone.

Top Asks from PC users

With the phone being the most used device for many people, it is no surprise that there are features users will want to see on their PCs too. This is not about being able to do the same things they do on their phones, but rather about benefitting from some critical enablers of the experience their phones can deliver. It was evident among our panelists that voice calls, messaging, and social media are best dealt with on a phone rather than a PC.

So when we asked consumers which features their smartphone has that they would want to see on their PC, the wishlist reflected all the key qualities of a smartphone. Top of the list was long battery life (36%), followed by instant-on (29%) and cellular connectivity (25%).

Interestingly, the second-highest feature was face authentication, at 30%. This reflects my previous comment that the PCs owned by our panelists seem to be on the older side and, of course, it also demonstrates the lack of Face ID support on the Mac. Considering that about 40% of our panelists had the newest Apple and Samsung smartphones, which support face authentication, this ask is not a big surprise, and as more phone manufacturers embrace face authentication, the need to support it on the PC/Mac will grow. For PC makers this is already an option, as Windows supports Windows Hello, but it will be interesting to see what Google and Apple do going forward.

Pain Points Are Not Always a Driver

In technology, I often find myself pushing service providers or hardware manufacturers to look at solving real-life problems to drive uptake of services or hardware refresh.

It is interesting how, when it comes to connectivity, consumers do not see it as a pain point, but they still want it. We asked our panelists how easy they feel it is to find an internet connection for their PC/Mac/Chromebook when they need it: 31% said it is very easy because they only use a computer at home on WiFi, and 17% said the same about their office/campus. Twenty percent said it is not a problem, as they mostly work from their desk, where their computer is connected. Eighteen percent use their smartphone as a hotspot, and only 5%, who are highly mobile users, admit that finding connectivity is a constant challenge.

If this were the issue always-on, always-connected PCs aimed to solve, it seems that a pitch built on delivering the kind of connectivity only 6% of our panelists currently experience with their connected PCs would not lead to much of an uptake. Yet it is clear to me, from the fact that 25% said they would want a cellular connection, that when we talk about connectivity it is not a question of solving a problem, but rather of delivering a level of convenience we have grown accustomed to with our phones.

If I am right, what PC makers, Qualcomm, and carriers will be able to offer in terms of plan activation and compelling data pricing will be the key to the success of the Always-on, Always-connected PC. Offering that convenience for a free trial period will get users to never want to give it up, setting the bar for what the next computing experience should be like.

The Connected PC

Sometimes it takes real world frustrations before you can really appreciate the advances that technology can bring. Such is the case with mobile broadband-equipped notebook PCs.

Before diving into the details of why I’m saying this, I have to admit upfront that I’ve been a skeptic of cellular-capable notebooks for a very long time. As a long-time observer of, data collector for, and prognosticator on the PC market, I clearly recall several failed attempts at trying to integrate cellular modems into PCs over the last 15 years or so. From the early days of 3G, and even into the first few years of 4G-capable devices, PC makers have been trying to add cellular connectivity into their devices. However, attach rates in most parts of the world (Western Europe being the sole exception) have been extremely low—typically, in the low single digits.

The primary reasons for this limited success have been cost—both for the modem and cellular services—as well as the ease and ubiquity of WiFi and personal hotspot functions integrated into our smartphones. Together, these factors have put the value of cellular connectivity into question. It’s often hard to justify the additional costs for integrated mobile broadband, especially when the essentially “free” alternatives seem acceptable.

Despite all these concerns, however, we’ve seen a great deal of fresh attention being paid to cellular connected PCs of late. Specifically, the launch of the always connected PC (ACPC) effort by Microsoft, Qualcomm, and several major PC OEMs (HP, Asus, and Lenovo) this time last year brought new attention to the category and started to shift the discussion of PC performance towards connectivity, in addition to traditional CPU-driven metrics. Since that first launch with Snapdragon 835-based devices, we’ve already seen second generation Snapdragon 850-based PCs, such as Lenovo’s Yoga C630, start to ship.

We’ve also seen Intel bring its own modems into the PC market in a big way over the last few months, highlighting the increased connectivity options they enable. In the new HP Spectre Folio leather-wrapped PC, for example, Intel created a multi-chip module that integrates its Amber Lake Y-Series CPU, along with an XMM 7560 Gigabit LTE modem. Conceptually, it’s similar to the chiplet-style design that combined an Intel CPU and AMD Radeon GPU into a single multi-chip module that Dell used in its XPS 15 earlier this year, but integrates a discrete modem instead of the discrete GPU.

Together these efforts, as well as expected advancements, highlight the many key technological enhancements in semiconductor design that are being directed towards connectivity in PCs. Plus, with the launch of 5G-capable modems and 5G-enabled PCs on the very near horizon, it’s clear that we’ll be enjoying even more of these chip design-driven benefits in the future.

Even more importantly, changes in the wireless landscape and our interactions with it are bringing a new sense of pertinence and criticality to our wireless connections. While we have been highly dependent on wireless connections in our PCs for some time, the degree of dependence has now grown to the point where most people really do need (and expect) reliable, high-quality signals all the time.

This point hit home recently after I had boarded a plane but needed to finish a few critical emails before we took off. Unfortunately, the availability and quality of WiFi connections while people are getting seated is dicey at best. But by leveraging the integrated cellular modem in my Spectre Folio review unit, I was able to finish them with no problem. Similarly, on a long Lyft ride to an airport on another recent trip, I leveraged the modem in the Yoga C630 for similar purposes. Plus, in situations like conferences and other events where WiFi connections are often spotty, having a cellular alternative can be the difference between having a usable connection and not having one at all.

Admittedly, these are first-world problems and not everybody needs to have reliable connectivity in these types of limited situations. In other words, I don’t think the extra cost of integrated cellular modems makes sense for everyone. But, for people who are on the run a lot, the extra convenience can really make a difference. This is another example of the fact that many of the technological advances that we now see in the PC market are generally more incremental and meant to improve certain situations or use cases. Integrated cellular connections are in line with this kind of thinking as they provide an incremental boost in the ability to find a usable internet connection.

In addition to convenience, the increase of WiFi network-based security risks has raised concerns about using public WiFi networks in certain environments. While not perfect, cellular connections are generally understood to be more secure and much less vulnerable to any kind of network snooping than WiFi, providing more peace of mind for critical or sensitive information.

Of course, little of this would matter if network operators didn’t make pricing plans for cellular data usage on PCs attractive. Thankfully, there have been improvements here as well, but there’s still a long way to go to truly make this part of the connected PC experience friction-free. The expected 2019 launch of 5G-equipped notebooks will likely trigger a fresh round of pricing options for connected PC data plans, so it will be interesting to see what happens then.

Ultimately, while some of the primary concerns around the connected PC remain, it’s also becoming clear that many other factors are starting to paint the technology in a more positive light. Always-on, always-reliable connections are no longer just a “nice to have,” but a “need to have” for many people, and along with the technology advancements, increased security and lower data plan costs are combining to create an environment where connected PCs finally start to make sense.

When Companies Don’t Know When to Stop

I’ve been wondering whether some of the major high-tech companies among the FAANG gang, don’t know when to stop their “continuous improvement,” a term revered by the Japanese in the 90s when they continually improved their products. My observation is we are seeing “continuous degradation” instead.

It seems these companies might have too many engineers spending too much time on the wrong tasks, continuing to invent after the products have reached a level of excellence. I see one example after another and can imagine how it comes about. Perhaps these companies have engineers who need to justify their high pay, so they do what comes naturally: keep coming up with new ideas. But in doing so, they run the risk of making their products worse.

Take Facebook, for example. When it allowed us to share vacation photos and updates with our friends and relatives, it was fun to use. I’d be able to see my daughter’s photos when she was vacationing in Hawaii every day, providing peace of mind along with a smile. It was much more effective and less intrusive than phone calls. But over time we all know what happened. Facebook added news that was full of fake and incendiary stories. It took more time to digest them than just looking at a photo, so Facebook figured out that news was better because we spent more time with it. And every change Facebook made was to extend our time of engagement. They took a delightful, pleasing experience and turned it into one that created anger, frustration, and was just no longer fun.

Another example is what Apple has done to its MacBook line. Five years ago, it had some of the best notebooks in the industry, far ahead of the competition. If you compare today’s MacBooks to those of five years ago, they’ve regressed. They have far fewer ports, no memory card slot, no longer the beloved MagSafe power connector, and some of the worst keyboards in the industry. While Apple attributes some of these changes to the need to make its products thinner and lighter, that argument doesn’t hold up, because many Windows notebooks are just as thin and light while still retaining the ports and excellent keyboards. The engineers should have left well enough alone.

Then there’s Google. We understand that their business model is to learn about us and direct more relevant advertising to us, and we agreed to have them track our travels in return for using Google Maps. We chose to use Gmail with its huge amount of storage and effective search in exchange for allowing them to scan email content to serve up more relevant ads. But now that they have perfected that model, they want to do more, that will make them evil. The group of engineers responsible for some of the Nest products was recently issued patents for putting sensors, microphones, and cameras throughout our homes to accumulate more personal information than most of us would be comfortable with. There’s no stopping Google until they know every tiny personal detail about us. They’re following the model of Facebook, and we know how that’s turning out.

Lastly, Amazon has created an amazing online store. While not the most attractive, it’s worked well. Rarely do we need to search for the right button, our choices are clearly laid out, and we can generally find things fast, read their reviews, and make a purchase quickly. But over the past year, they’ve not left well enough alone. Often now, when searching for a product, you get a first page filled with paid ads, making it more difficult to do what you came to do. Some ads are even deceptive, as a recent article described, placing ads in the middle of a bridal registry and tricking buyers into purchasing an item they assumed was requested by the bride. When making purchases, I’m now constantly asked if I want a warranty or a subscription for much of what I buy; many times, those options are not even appropriate. And what once were simple shipping options have now become as complicated as choosing another product. Again, the engineers at Amazon seem not to know that they had a successful site and are now making it much more difficult to use.

I’m not against progress when it improves things that benefit the customer. But, in the case of some of these giants of tech, their greed seems to have taken over, messing up what once were excellent products.

Podcast: Amazon AWS reInvent, HP Inc. and Dell Earnings, Apple Music and Amazon

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing multiple announcements from Amazon’s AWS re:Invent conference, including the launch of several new custom chips, discussing the impact of HP’s and Dell’s earnings and what it means for the PC market, and chatting about the new agreement that will let Apple Music work on Amazon Echo devices.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Let’s Stop Politicizing the Data-Privacy Issue

This has been a year when significant light has been shed on how technology companies such as Facebook and Google leverage customer data for financial gain. Some of this should not be surprising, since little in life, after all, is free. The grand bargain has been that we get to use Google, Facebook, Twitter, and the like in exchange for their accessing and using some aspect of our data, the modern-day version of why broadcast TV has always been ‘free’. Things went wrong when these companies abused that privilege, failed to be transparent, and were slow or resistant to implement change. At the least, the transgressors (a pretty long list) have shown remarkably poor judgment, with the worst offenders acting immorally and perhaps illegally.

That said, I’ve been bothered by how some of the companies who reckon themselves as the ‘good guys’ have been using this opportunity pile on their Silicon Valley brethren for competitive gain. Tim Cook has publicly vilified Facebook on several occasions, most recently in a high-profile speech in Europe. This week, Ginni Rometty, CEO of IBM, joined Tim Cook and others in criticizing some companies’ abuse of consumer data.

I find this behavior somewhat hypocritical. Google, Apple, and Facebook, especially, are hugely interrelated. None of them would be what they are, or as profitable as they are, without each other. With so much politicized these days, I don’t think most of us are anxious to see Silicon Valley polarized as well. And while Apple hasn’t ventured into the same territory as Facebook, I can’t exactly see Tim Cook as tech’s white knight, given Apple’s at least indirect role in fomenting the modern-day near-crisis of screen addiction.

So here’s an idea. Rather than more useless Congressional hearings or the development of regulations that will take years to develop and implement (the debate about which will also be undoubtedly politicized), how about the Silicon Valley brain trust coming together to fix this? These are the folks who can be proactive in developing some ‘rules of the road’ that could protect consumers and mollify regulators, while ensuring that their primary means of making money is not significantly compromised.

It is not any one company that is going to fix this. We might be able to come up with some basic minimum standards regarding the use of consumer data. But there are many other elements to a successful implementation. One is transparency. This will go a long way in restoring trust. And by the way, part of this might involve tiers of relationships with consumers. For example, would some consumers be prepared to pay a modest fee to use some of these apps free of ads or data utilization, in the same way there are different subscription options for Hulu?

Another key aspect of this is defining a common standard for consumers to see how their data is used, along with settings for various levels of opt-in. Facebook has done some of this in reaction to all the hullabaloo this year, but the settings are still somewhat buried and the interface is not intuitive. It will not be helpful if the experience and UI are completely different from one company to the next. I’d love to see a common dashboard that cuts across the major consumer applications. This could be a great output of some collective work by Silicon Valley’s leaders.
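
To illustrate what such a common dashboard might actually standardize, here is a hypothetical sketch of a shared consent data model. Every type and field name below is invented for illustration; no such cross-company standard exists today.

```typescript
// A hypothetical, illustrative schema for a cross-service privacy dashboard.
type DataUse = "ads" | "personalization" | "research" | "sharing";

interface ConsentSetting {
  service: string;     // e.g., "facebook", "google" (hypothetical identifiers)
  use: DataUse;        // what the data would be used for
  optedIn: boolean;    // the user's current choice
  lastChanged: Date;   // when the user last touched this setting
}

// One dashboard action, applied identically across every participating service:
// opt the user out of ad-related data use everywhere at once.
function optOutOfAds(settings: ConsentSetting[]): ConsentSetting[] {
  return settings.map((s) =>
    s.use === "ads" ? { ...s, optedIn: false, lastChanged: new Date() } : s
  );
}
```

The point is less the code than the contract: if every participating company exposed its consent settings in one agreed shape, a single dashboard could render and edit all of them in the same way.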

And here’s an opportunity for Apple, especially in the U.S., where they still own the majority of the smartphone market. They clearly have deep expertise in software and user-friendly design. The ‘screen time’ settings are a good initial effort at addressing that issue…why not offer to port that development to a ‘consumer data settings dashboard’.

If Tim Cook and Ginni Rometty want to be Silicon Valley’s white knights, why not stop vilifying their colleagues and, instead, say to their tech brethren: “Guys, we have a problem. Let’s use our collective resources (software, UI, AI) to head this thing off at the pass.” By heading this off at the pass, I mean not leaving it to Washington, and not developing something quite as overwrought as GDPR, which might not fully fly in the more market-oriented U.S. of A.

Several key tech company execs have already said they would be open to some form of regulation. So perhaps it would be more effective to initiate this from the Valley rather than the Beltway. Call it the ‘Consumer Data Privacy Task Force’, deputize a couple of senior execs from each of the major players, and have them come up with a plan to present to Congress. It should address the three key issues: rules for what consumer data can be shared, how, and by whom; how this gets communicated to consumers; and the optimal way to give users visibility into, and control over, their data in a standardized, intuitive fashion.

We’re at a fork in the road on this issue. It has become politicized, and is in danger of falling into the hands of regulators, who are ill-equipped, on numerous fronts, to effectively address it (see: Facebook Congressional hearings). The big question is, can Silicon Valley step up?

Two Tech Trends That Failed to Save My Black Friday

I make no secret that I love to shop, though I like the buying more than the actual process of shopping. Before I get out the door, I know which stores I will go to and have a clear idea of what I need or want to buy. While my outing might still take a couple of hours, it is a targeted and organized operation, despite what my husband and kid might say.

Since my pleasure is in buying, not shopping, I do a fair amount of it online. Amazon is my friend, but so is a long list of websites that carry my favorite brands, accept Apple Pay or PayPal, and offer free shipping.

When big sales days come around, like Black Friday, I prioritize online shopping, but I will go to some physical stores, mostly for clothes and accessories. This year I found going to the local mall an excruciating process, more so than it usually is, mostly because I saw all the different ways tech could have made it better but didn’t.

AI and Big Data

Let’s start with how stupid the shopping process still in. Both online and in-store there is little or no intelligence used by retailers to make your experience less painful and more rewarding for you and them.

This lack of intelligence starts at home, where you are inundated by ads leading up to the big day that more often than not poorly reflect your buying habits. This is particularly ironic this year, when both tech and politics have spent several months discussing privacy and how much information internet giants should have access to. Well, right now they, as well as most of the retailers I shop from, have access to a lot of information, but the targeted advertising I receive is still pretty dumb. The digital crumbs I leave as I go from site to site follow me around as very generic suggestions, and one has to wonder why the sites I trust and shop from most often do not have a profile of me.

Let’s take Amazon as an example of an online retailer as I shop with them consistently and I purchase a vast range of items for myself, my family, pets, and home. Amazon has the list of all my orders as well as everything I browsed and how many times I looked at an article without clicking “buy.” Why don’t I get an email with suggestions from the deal of the day that match my buying patterns? For instance, why don’t I get prompted for an item I looked at but did not buy? Chances are I did not do so based on price, so an offer might get me to finally purchase. Or again, why don’t I get offers on products upgrades? Say I bought a Ring doorbell two years ago and the latest model is now on sale. Why not send me offers for complementary products like what I might want to add to my smart home after buying several Echo products, a doorbell, and some bulbs?

Much of the same could be said for the brands I shop from on a regular basis, and even more so for those where I am part of a rewards program. If I have a reward card in my digital wallet that pops up to tell me I am close to a specific store, why doesn’t that retailer push it a step further and send me relevant offers on what is available in store? Why aren’t traditional apparel retailers offering an in-store version of Stitch Fix, where, based on your previous purchases and your body type, they put together a number of outfits that will be ready for you in a store changing room at a specific date and time? You would walk straight to the changing room you were pointed to on your phone as you entered the store, try everything on, tap the RFID tag of what you want to keep before you put it in a bag, and walk out while your credit card is automatically charged. I grant that such a system might not be viable on a heavy-traffic day, but it is hard to believe it would not work any other day.

I understand that many of the sales occurring on Black Friday end up being for items you had not planned to buy, but intelligent shopping does not mean that impulse shopping must die. It would actually mean you end up being exposed to more items you are likely to respond positively to, resulting in more revenue for the retailer but also much higher satisfaction on your part. At the end of the day, there isn’t much that is more satisfying than a successful shopping spree.

Mobility

Black Friday is such a big shopping day that stores have been opening earlier and earlier, with some now starting their sales on Thanksgiving Day. I have gotten up early in the past, mostly to avoid the crowds, but this year was not one of those times, and I had to make three attempts to reach the mall. That’s right: only on Sunday was I able to get into the parking structure of the Westfield Mall in Santa Clara and park my car! The first two times, it was impossible to even get to the parking structure due to the high volume of cars.

So this begs the question: where were Lyft and Uber? In the land that invented ride-sharing and scooters, it seems to me that talk about the death of car ownership was immensely premature. I understand, of course, that scooters might not be the safest choice when you are holding shopping bags, but why are people not relying on Uber and Lyft to avoid the pain of parking? I would guess that a lot has to do with how many of these malls treat rideshare services as second-class transportation providers and relegate them to drop-off and pick-up locations that are less than ideal for both passengers and drivers. In my case, a Lyft driver would either have gotten stuck in the same traffic I was trapped in for over an hour or would have had to drop me off miles away from the entrance.

Why are malls not keeping pace with what their customers want by offering preferential lanes and temporary parking spots for rideshare companies? Airports have adapted to this, and while some still make you walk a long way to get to a ride-share pickup location, things are changing fairly quickly. Malls should learn from them, especially in the US, where mall parking is free. I can see other countries, like the UK, where most shopping centers charge a fair bit to park, resisting such a change, as it would result in a loss of revenue.

What ruined my Black Friday could have been solved by technology today, not in some distant future. As I have often said, though, technology might be ready, but business models and humans are not. Data and AI have the power to make my shopping much more tailored to my needs and ultimately more effective. That, coupled with a pain-free rideshare trip to and from my favorite stores, could have delivered a “shopping like a star” experience. But if the big parking structure being built next to the mall is a good indication of how quickly things will change, I am sad to say it will be a while before retail catches up with what technology can already enable.

Robots Ready to Move Mainstream

Are the robots coming, or are they already here? Fresh off the impressive, successful Mars landing of NASA’s InSight robotic lander, it seems appropriate to suggest that robots have already begun to make their presence felt across many aspects of our lives. Not only in space exploration and science but, as we enter the holiday shopping season, in industry and commerce as well.

From the factories building many of the products in demand this holiday season to the warehouses that store and ship them, robots have been making a significant impact for quite some time. Building on that success, both Nvidia and Amazon recently made announcements about robotics-related offerings intended to further advancements in industrial robots.

Just outside of Shanghai last week, at the company’s GTC China event, Nvidia announced that Chinese e-commerce giants JD.com and Meituan have both chosen to use the company’s Jetson AGX Xavier robotics platform for the development of next-generation autonomous delivery robots. Given the expected growth in online shopping in China, both e-commerce companies are looking to develop a line of small autonomous machines that can be used to deliver goods directly to consumers, and they intend to use Xavier and its associated JetPack SDK to do so.

At the company’s AWS:Invent event in Las Vegas this week, Amazon launched a cloud-based robotics test and development platform called AWS RoboMaker that it’s making available through its Amazon Web Services cloud computing offering. Designed for everyone from robotics students who compete in FIRST competitions through robotics professionals working at large corporations, RoboMaker is an open-source tool that leverages and extends the popular Robot Operating System (ROS).

Like some of Nvidia’s software offerings, RoboMaker is designed to ease the process of programming robots to perform sophisticated actions that leverage computer vision, speech recognition, and other AI-driven technologies. In the case of RoboMaker, those services are provided via a connection to Amazon’s cloud computing services. RoboMaker also offers the ability to manage large fleets of robots working together in industrial environments or places like large warehouses (hmm…wonder why?!)

The signs of growing robotic influence have been evident for a while in the consumer market as well. The success of Roomba robotic vacuums, for example, is widely heralded as the first step in a home robotics revolution. Plus, with the improvements that have occurred in critical technologies such as voice recognition, computer vision, AI, and sensors, we’re clearly on the cusp of what are likely to be some major consumer-focused robotics introductions in 2019. Indeed, Amazon is heavily rumored to be working on some type of home robot project—likely leveraging their Alexa work—that’s expected to be introduced sometime next year.

Robotics is also a key part of the recent renaissance in STEM education programs, as it allows kids of many ages to see the fun, tangible efforts of their science, math, and engineering-related skills brought to life. From the high-school level FIRST robotics competitions, down to early grade school level programs, future robotics engineers are being trained via these types of activities every day in schools around the world.

The influence of these robotics programs and the related maker movement developments have reached into the mainstream as well. I was pleasantly surprised to see a Raspberry Pi development board and other robotics-related educational toys in stock and on sale at, of all places, my local Target over the Black Friday shopping weekend.

The impact of robots certainly isn’t new in either the consumer or business world. However, except for a few instances, real-world interactions with them have still been limited for most people. Clearly, that’s about to change, and people (and companies) are going to have to be ready to adapt. Like the AI technologies that underlie a lot of the most recent robotics developments, there are some great opportunities, but also some serious concerns, particularly around job replacement, that more advanced robotics will bring with them. The challenge moving forward will be determining how to best use robots and robotic technology in ways that can improve the human experience.

PC Market Resilience

At the height of the PC revolution, the industry sold about 350 million PCs a year. Those days are long gone, and unit sales of PCs have declined ever since. But during the last 18 months, we have seen a slight uptick in PC demand, mainly from buyers looking for mid- to higher-priced laptops and desktops. Some key industry players did not see this new demand coming and, in the case of Intel, do not have enough mid- to high-end processors available to meet the demand from some of their OEMs this quarter.

While demand is slightly up, don’t be fooled by this uptick. Demand for PCs continues its long-term decline and will never again be as robust as it was a decade ago.

Current forecasts lay out the dim prospects facing the traditional PC market and show that by 2020 we will barely be selling 250 million PCs annually.

There are many reasons for the decline in PC demand, but two products appear to have had the most significant impact on many people, especially consumers.

Until the introduction of the smartphone, a PC was the only way people could gain access to the Internet, web browsers, email, and any other type of digital content or apps they wanted to interact with. And with the introduction of the iPad, that experience got even more portable.

That does not mean demand for PCs will ever stop completely. Indeed, the PC is still the most important productivity tool for business users and many consumers. But the smartphone and tablet can do about 70% of what a person can do on a laptop or desktop, in more portable settings, and they are becoming the personal computers people use most every day of their lives.

What we are seeing is a lot of people using their PCs for what we call “heavy computing”: creation, serious productivity, and apps that demand more horsepower and need a full-blown PC or laptop for specific tasks. A lot of people in business, and some high-end consumers, still need PCs and laptops for the kind of work they do daily or, in the case of consumers, whenever they need to do a task or project that requires more capability than they can get with a tablet or a smartphone.

That is why, especially in the business sector, we are seeing a bit more demand for mid- to high-end laptops and desktops, with buyers willing to pay more for quality, speed, and durability, with the goal of keeping the machines four to five years. High-end consumer demand somewhat tracks this same approach of buying more expensive laptops and PCs and expecting to keep them for four to five years too.

But some of the OEM’s realize that users in business and consumer markets are using their smartphone more and more for productivity and see some of the new tablets, like Apple’s new 12.9” iPad Pro, even encroaching on laptop buyers who want performance but more portability.

I have talked to one major OEM who is thrilled that PC demand is slightly rising now, but whose forecasters believe that within the next two to three years we may be lucky if the PC industry sells 225 million PCs annually.

But there is one buying group in the wings that is becoming a deep concern for traditional PC vendors. That group is often called early Gen Z and ranges in age from 14 to 18. If you have a kid in this age range, you already know that the smartphone is the center of their computing lifestyle. These kids are highly tech literate and have even figured out ways to be productive using just their smartphones.

While some of these kids have been using Chromebooks in their schools, others have used mainly iPads and other tablets. But their primary computing tool is a smartphone. Even more important is their mastery of these smaller screens and their user interfaces. My two granddaughters’ school uses only iPads. They do all of their work assignments just on an iPad. I watched the kids in their school using the iPad to do their homework, read their textbooks, give presentations, and fundamentally use it as their primary computing tool. Interestingly, since the iPhone largely mirrors the iPad in UI and capability, some even do part of their homework assignments on an iPhone.

With this in mind, I was privy to some early research aimed at finding out what form factor this age group is likely to demand when they enter the workforce. While the early research suggests they would not be opposed to using some form of a laptop, their preferred form factor tends toward a tablet-with-keyboard design. It turns out that growing up using their smartphones as their most personal computer is influencing the type of computer they will want to use when they start their work careers.

Microsoft kicked off this concept with its original Surface tablet/keyboard product some years ago, and it turns out that the core buyers of this type of Surface computer are mostly in millennial age ranges. Recently, Apple took a bold step in positioning its new 12.9-inch iPad Pro as a potential laptop replacement. And we hear from ODMs that at least two large laptop vendors have been asking for more tablet-focused devices with built-in keyboards too.

While these 14- to 18-year-olds are very smartphone-centric, I am sure you have seen kids as young as ten getting smartphones too. This entire generation will grow up using the smartphone as their primary computer, since it is the device they spend the most time with, using it for both consumption and various forms of productivity such as email, texting, and research.

PC vendors who only focus on laptops and even 2-in-1s need to do more research on the younger Gen Z audience and study closely how they use their smartphones as their primary computers. My guess is that by the time they enter the workforce, traditional notebook designs will be their parents’ computers, and they will want to use something very different from what is offered to most in the workforce today.

Podcast: Dell Analyst Summit, Citrix-Sapho, Nvidia Earnings, Dolby and Microsoft Headphones

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Dell’s Analyst Summit, the Citrix analyst event and their purchase of Sapho, Nvidia’s recent earnings, and the release of new noise-cancelling headphones from both Dolby and Microsoft.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

News that Caught My Eye: Week of Nov. 16, 2018

Citrix Acquires Sapho

Citrix announced on Thursday that it had acquired Sapho, a leading micro-app platform which it will use to enhance the guided work capabilities within Citrix Workspace, enabling people to work with even greater speed, intelligence, and simplicity.

Via Citrix

  • Sapho is very well known for delivering micro-apps for collaboration platforms such as Slack and Microsoft Teams
  • The micro-apps are linked to popular SaaS products such as Outlook, Google Drive, Salesforce, and Concur, and they allow for actionable tasks within those collaboration tools.
  • Citrix paid $200 million in an all-cash deal for the startup, which had raised just shy of $30 million since 2014: a good return for Sapho and a good investment in the future of Workspace for Citrix.
  • Citrix will integrate Sapho within its Workspace so as to streamline processes and workflows. For instance, a Concur micro-app allows a manager to approve or deny an expense report without having to leave Workspace and launch the full Concur app.
  • This seems like a very good fit given the overlap the two companies had in their client base and the request both companies were receiving from customers for more integration.
  • Many companies will appreciate the integration of Sapho’s micro-apps into Workspace purely from a productivity perspective. However, users will also see a way to bridge old and new by bringing legacy apps that might have lost some of their “sexiness” into a much more modern workflow that will appeal to the growing numbers of the younger workforce.

Facebook: Delay, Deny and Deflect

In a devastating article, the New York Times, citing more than 50 sources, accused Facebook of:

  • employing a Republican opposition research firm to “discredit activist protesters,” in part by linking them to the liberal billionaire George Soros;
  • using its business relationships to lobby a Jewish civil rights group to flag critics and protesters as anti-Semitic;
  • attempting to shift anti-Facebook rhetoric onto its rivals, planting stories with reporters so that others would soak up the blame;
  • posting “less specific” carefully crafted posts about Russian election interference amid claims that the company was slow to act;
  • and urging its senior staff to switch to Android after Apple chief executive Tim Cook made critical remarks about Facebook’s data practices

Via The New York Times 

  • Over the past year, I have discussed the Cambridge Analytica scandal, as well as Mark Zuckerberg and Sheryl Sandberg’s trips to Washington, several times. In my assessment, there were always two constants: one, that the business model called for growth and more growth; and two, that Zuckerberg just does not seem in touch with his own company or with humanity.
  • After this week, and especially after today’s call with the press, I am even more convinced that those two points are true, and a few more have been added.
  • As you can imagine, Facebook denied that much of what the New York Times reported was true. Yet, the call itself was the best example of delay, deny and deflect.
  • Deny the story
  • Deflect by announcing that Facebook appointed a new independent council to deal with appeals to decisions on content. It seems that this council was thrown together in a rush and has no real power to actually change the course of things.
  • Delay responsibility, as Zuckerberg says that users are pretty clever and should figure things out by themselves. This in particular seems to miss the very point of why fake news, propaganda, and the rest worked: users are not smart enough to figure out whether what they read is true or not. It also seems like Facebook is throwing in the towel and putting the burden on the victim. It is like saying, “I cannot hold on to this dog, but I trust you can run fast enough not to get bitten!” It just does not work like that. The responsibility must be on Facebook.
  • As I always say, it is not the incident itself that loses you customers but the way you respond to it. Facebook users might have moved on from the Cambridge Analytica story, but management’s lack of leadership and its limited willingness to be held accountable have done nothing to reassure users that things will be different in the future.
  • Facebook’s leadership might see #deletefacebook as a mere annoyance for now, but I do wonder whether the biggest impact of this whole situation on the business will be advertisers losing trust in the company’s management: trust in its ability to resolve the crisis and, most importantly, to recover from it and move the business forward.
  • The board’s continued support for Zuckerberg and Sandberg might also start to concern Washington, which began asking questions before the midterms; that scrutiny will only escalate as we get closer to the 2020 presidential election.

Google Maps New Features

Last year Google enabled users in select countries to message businesses from the Business Profiles on Google. Sending messages to businesses gives you the opportunity to ask questions without having to make a phone call. Now you’ll see your messages with the businesses you connect with via Business Profiles within the Google Maps app, where you’re already looking for things to do and places to go or shop.

Via Google 

  • Google announced this feature at Google I/O last spring, and I thought it made a great deal of sense.
  • Keeping users inside the app to get more information makes perfect sense, and in a world where we all seem to prefer messaging to talking, so does letting you message the store to save yourself a call.
  • I also like the thinking behind keeping business messaging separate from personal messaging, so there are no concerns about accidentally messaging a business rather than a personal contact.
  • Of course, Google wants you to message rather than call not just because it wants to make your life easier. If you call the business, there is no direct way for Google to use the information you share to improve the service, something it can do with messaging. Think, for instance, of asking whether there is parking at a store: once you get your answer, that information could be added to Maps so other users can see it without having to ask.
  • All that said, there is a concern, shared by some who have reviewed the new feature, that Maps is turning into a huge source of information at the expense of the very thing it was designed for: navigation.
  • Users can find the new richness of Maps overwhelming. More importantly, with more information displayed on the map, core details like street names get less space, making the map harder to read.
  • It will also be very interesting to see how this feature is enabled within Android Auto. I would expect that, for legal reasons, messaging will only be supported by voice, which might lead some users to revert to calling the business for a more straightforward exchange.
  • It is always hard to find the right balance between offering more information and keeping things visually simple for straightforward navigation, and this is especially true with maps. I have always been a fan of Google Maps because it gives me more, so I am eager to test whether more is just becoming too much!

Amazon’s HQ2 Search Has Catalyzed An Important Thought Process

In the end, Amazon’s announcement of Queens and Crystal City as the two new HQ locations was anti-climactic. In part because of the decision to split HQ2 after a year of breathless media coverage, and in part because many believe that the D.C. area choice was already a foregone conclusion given Jeff Bezos’ already substantial ties to the region. Cynics believe that the choice was typically Amazonian – where the company could get the best deal, rather than it necessarily being the best location for the company or, heaven forbid, embracing something slightly more altruistic by helping an up-and-coming tech city like Pittsburgh or Atlanta move to the top tier.

More importantly, this highly public process prompted a year-long conversation about how cities must compete for business and talent in the 21st century economy. The Bay Area and Seattle are already overheated, and the infrastructure (affordable housing, transport) to support much more is inadequate. In my hometown of Boston, which was one of the finalists, there was a collective feeling of relief at not having been selected, given already sky-high housing prices, clogged roads, and an overburdened public transport system. And it is both sad and a poor reflection on our current leadership when it takes the prospect of an Olympics or a major new corporate headquarters to catalyze the kind of strategic thinking and investment required of any city that wants to be competitive in the 21st century economy.

In my view, there are five key elements necessary to be in the game:

  • Talent. Both extant and potential via a good educational system and strong universities.
  • Diversity of economy. You don’t want to be too dependent on any one industry or economic sector. Pittsburgh is a great example. Whereas the collapse of the steel industry nearly killed Pittsburgh 1.0, Pittsburgh 2.0 has a much greater diversity of vibrant industries, fueled by a unique level of cooperation among its private, public, and educational sectors.
  • Infrastructure. An adequate road system and a viable public transportation system. It’s becoming clear that 21st century workers don’t want to spend their lives sitting in cars. For certain types of employers/employees, proximity of a decent airport is also a factor.
  • Affordable Housing & Livability. If you’re earning $100,000 and can’t afford a pleasant one- or two-bedroom apartment/condo in the city or a modest home in a close-in suburb, it’s a problem. There’s also the slightly more amorphous concept of ‘livability’, such as a city’s walkability and the presence/proximity of culture and other amenities. Something I’ve always thought is important is ‘what’s a tank away?’; in other words, are there nice places you can easily get to for a day trip or a weekend (beach, mountains, etc.)?
  • Progressive Local Leadership. Given the dysfunction on the national level and lack of strategic, long-term investment in education and infrastructure, cities and states with strong local leadership are breaking through. Examples: Nashville, Tulsa, and Los Angeles.

Now, let’s take a look at a few of the cities that were not only finalists in the Amazon hunt, but would be viable contenders at least for the ‘next Amazon’. How do they stack up on the above criteria?

Boston. Educational institutions, diversity of economy, livability, and talent are its strengths. But the city’s housing prices have become Bay-area-esque, and its infrastructure is overburdened and crumbling, with no long-term plan in place. It’s like a city that’s over-touristed and is saying, ‘no more’.

Dallas. Yes, its economy is on fire, but my sense is that this is a place people move to once their career is established rather than a preferred location for younger talent. And while it’s affordable compared to a lot of other cities, Dallas lacks the top-tier educational institutions that feed tech companies, and it remains too auto-centric, despite some recent investments in public transport.

Austin. This city has a lot of the right ingredients in place and has attracted a lot of tech companies already. A much younger, more vibrant feel than Dallas, in part because of the giant University of Texas at Austin. But growth has outpaced infrastructure investment, with sprawl and traffic impacting the quality of life factors that made this city attractive after Dell helped put it on the map.

Atlanta. Has many of the same attributes and challenges as Dallas, but is a notch above in terms of top tier educational institutions. I think traffic/sprawl/infrastructure challenges are what kept it out of the running for Amazon.

Pittsburgh. Here’s a city that has done, and is doing, a lot of things right to become a 2.0 version of itself. A quite livable place. It is not yet a major league city on a global scale, though, and it needs to invest substantially in its transport infrastructure if its current growth trajectory continues.

Nashville. Has a lot of the same ingredients as Pittsburgh, and has used its assets to become a major healthcare/tech center. A progressive mayor has courted companies and made the right investments and strategic decisions to make the city much more livable (new park & bike trails, better roads/transport, tons of new housing).

Los Angeles. The highly progressive mayor Eric Garcetti is making huge investments in infrastructure and affordable housing and doing real things to address the homeless issue, tackle inequality, and diversify the economy. This fascinating, diverse place has the potential to be a revitalized global city for the 21st Century…or it could get crushed under the weight of its size and years of under-investment.

I should also mention three Canadian cities that are already seriously on the map:

Toronto. Now North America’s third largest city, Toronto (and the tech epicenter of nearby Waterloo) was a contender for Amazon HQ. And not just because of the idea of ‘Trump Snub’ that the media loved writing about. This city is culturally and economically diverse, has strong educational institutions, and is very livable. It does suffer from U.S.-like problems of traffic and sprawl and inadequate rapid transit outside the city core. And housing prices are among the highest in North America. But you will be hearing more about Toronto in the coming years.

Vancouver. Incredible quality of life, if you can get past the Seattle-esque six months of gloominess. This will be a major 21st century city, given its setting, strong educational institutions, and diversity. The huge run-up in real estate prices (mainly due to foreign investment) and the lack of good rapid transit are challenges…that are actually being addressed.

Montreal. This city has all the right ingredients: already a tech and creative hub, strong educational institutions and tons of talent, still relatively affordable, and a high quality of life. Montreal is also making a significant investment in improving its roads and expanding its transport system. Its brutal winters are a factor for some, and still restrictive language laws keep some companies (and people) away.

And finally, here are a few more from among the major North American cities:

Getting to the Next Stage: Minneapolis-St. Paul, Portland (OR), Phoenix, Detroit, Philadelphia.

Not Progressing/Worry Button: Chicago, Miami, Orlando, Charlotte, Baltimore…and San Francisco.

Cities shouldn’t be overdoing the post-mortem about why they didn’t get Amazon HQ2. Instead, they should be thinking about what’s needed for them to land the ‘next HQ’.

Dolby Brings a New Dimension to Home Entertainment

Consumers are very familiar with the Dolby brand. Whether you often visit a Dolby Cinema or have a TV or computer that supports Dolby Atmos and Dolby Vision, you know Dolby delivers one of the best entertainment experiences, one that allows you to lose yourself in the content you are consuming.

At a time when delivering an experience has more and more to do with the combination of hardware, software and AI, Dolby brings to market its first consumer device: a set of wireless headphones called Dolby Dimension.

Making Hardware Does Not Make You a Hardware Maker

It is always easy, when a brand brings to market a product in a category where it has not been present before, to think of it as “entering the market.” Of course, technically this is what it is doing. But there are different reasons why a brand decides to get into a new space. Potential revenue is usually at the core of such a move, but even then, how that revenue is generated differs. Sometimes revenue comes directly from the new product. Other times, the upside comes from how the product boosts brand perception in areas where the name was already present.

When Dolby spoke to me about Dolby Dimension, I thought about how well it fits the company’s history and DNA, as well as how it delivers on a market need. To understand why Dolby is taking this step, one should take a quick look at how home entertainment is changing.

In a recent Hollywood Reporter study of 2,044 consumers, it is clear that binge-watching is becoming the norm in the US, and not just for millennials. While 76% of TV watchers aged 18 to 29 said they preferred bingeing, older age brackets are not far behind, with 65% of viewers aged 30 to 44 and 50% of those aged 44 to 54 preferring to binge. And it is not just about how many people binge-watch; it is also about how often they do so. Across the national sample of the October study, 15% say they binge-watch daily. Another 28% say they binge several times per week.

Many will argue that the wireless headphones market is already super competitive, and that Bose firmly controls the high end of the market, so Dolby should have thought twice before entering this space. But this is where the “entering this space” debate starts. From how I look at it, Dolby was looking to expand the ways its technology can be experienced. This took the form of a set of headphones that bring value to a specific set of consumers: people who appreciate high-quality sound, spend hours watching content on whatever screen is most convenient in their home, and see the $599 price tag as an investment in a superior experience that allows them to binge smarter.

It is when you look at the technology packed inside Dolby Dimension, and at the specific use cases Dolby has in mind, that you understand why this is not a simple branding exercise. The initial availability limited to the US market, and distribution focused on dolby.com, confirm to me that Dolby is not interested in a broader consumer hardware play, which I am sure will let hardware partners exhale a sigh of relief.

Not Just Another Set of Wireless Headphones

Most wireless headphones today are designed for users on the go. They keep you immersed in your content or your work by isolating you from the world around you through noise canceling.

There are some models on the market, the latest being the Surface Headphones, that allow you to adjust the noise canceling to let some of the world in when you need to. This is, however, done manually.

Dolby Dimension is designed with home use in mind, which means a few things are different. First, the new Dolby LifeMix technology allows you to dynamically adjust how much of the outside world you let in. Settings, activated through touch controls, let you find what Dolby calls the “perfect blend” between your entertainment and your world, or shut the outside world out entirely through Active Noise Cancelling. If you, like me, binge-watch in bed at night, you might appreciate being able to stay fully immersed in your content when your other half falls asleep before you and snoring gets in the way. Other times, you might want to be able to hear your daughter giggling away next door because she decided to ignore your multiple lights-off requests!

Over the days I have had to play with Dolby Dimension, what impressed me most is how it really gives you the feeling of sitting in a theatre. This is especially striking when you are watching content on a small screen like a phone or a tablet. The sound, which of course Dolby will tell you is half the experience, really brings that little screen to life, letting you enjoy content at its best. I felt so immersed in what I was watching that I am pretty sure I experienced the kind of “mom’s voice canceling” my kid has naturally built in when she is watching any of the Avengers movies or gaming!

There are a few more details that highlight what Dolby had in mind with these headphones. Dolby Dimension can be paired with up to eight devices, and you can quickly toggle among your favorite three with dedicated hardware keys on the right ear cup. When you pick a device, hitting the key associated with it takes you straight to your entertainment app of choice, like Netflix or Hulu, not just to your device.

Battery life reflects a best-sound approach, delivering up to 10 hours with LifeMix and Virtualization turned on, and up to 15 hours in low power mode. So whether you, like 28% of the study sample, binge-watch two to three episodes per session or, like another 21%, watch four episodes at once, you will be left with plenty of power. While we might be tempted to think about a long flight or a day at the office, this is not what Dolby Dimension was designed for; to be honest, if those are your primary use cases, Dolby Dimension is not really for you.
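
As a quick back-of-the-envelope check, those battery figures go a long way. The episode length below is my own assumption, not a Dolby number:

```python
# Rough episodes-per-charge estimate; the 45-minute episode length is an assumption.
episode_minutes = 45
for mode, hours in [("LifeMix + Virtualization on", 10), ("low power mode", 15)]:
    episodes = hours * 60 // episode_minutes
    print(f"{mode}: ~{episodes} episodes per charge")
```

Even in the most power-hungry mode, that is roughly thirteen 45-minute episodes: several binge sessions between charges.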

Headphones are Key to the Experience

It is fascinating how over the past year, or so, headphones have become a talking point in tech. I think the last time that was the case was when Bluetooth was introduced and we got excited about being able to have a conversation on the phone without holding the phone.

When we discuss the removal of the audio jack from our devices, or which digital assistant is supported (assistants you can summon with Dolby Dimension), we are pointing to the fact that headphones have become an essential part of our experience. Considering how much time we spend in front of one screen or another, both at home and on the go, being able to enjoy both visual and audio content is growing in importance. As intelligence gets embedded in more devices, and smaller and smaller devices benefit from higher processing power, headphones can become a device in their own right rather than being viewed merely as an accessory.

While I don’t believe Dolby is interested in becoming a consumer hardware company, I am convinced it will continue to innovate and watch how consumers’ habits change when it comes to consuming content. As we move from physical screens to augmented reality experiences, and eventually virtual ones, Dolby might continue to take us on a sensory journey through technology and, if needed, hardware.

Chiplets to Drive the Future of Semis

A classic way for engineers to solve a particularly vexing technical problem is to move things in a completely different direction—typically by “thinking outside the box.” Such is the case with challenges facing the semiconductor industry. With the laws of physics quickly closing in on them, the traditional brute force means of maintaining Moore’s Law, by shrinking the size of transistors, is quickly coming to an end. Whether things stall at the current 7nm (nanometer) size, drop down to 5nm, or at best, reach 4nm, the reality of a nearly insurmountable wall is fast approaching today’s leading vendors.

As a result, semiconductor companies are having to develop different ways to keep the essential performance progress they need moving in a positive direction. One of the most compelling ideas, chiplets, isn’t a terribly new one, but it’s being deployed in interesting new ways. Chiplets are key IP blocks taken from a more complete chip design that are broken out on their own and then connected together with clever new packaging and interconnect technologies. Basically, it’s a new version of an SoC (system on chip), which combined various pieces of independent silicon onto a multi-chip module (MCM) to provide a complete solution.

So, for example, a modern CPU typically includes the main compute engine, a memory controller for connecting to main system memory, an I/O hub for talking to other peripherals, and several other different elements. In the world of chiplets, some of these elements can be broken back out into separate parts (essentially reversing the integration trend that has fueled semiconductor advances for such a long time), optimized for their own best performance (and for their own best manufacturing node size), and then connected back together in Lego block-type fashion.
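
As a purely illustrative sketch of that “Lego block” idea (the block names, node sizes, and fabric label below are my own examples, not any vendor’s actual design), a chiplet-based package can be modeled as independent blocks, each on whatever process node suits it best, joined by a shared interconnect:

```python
# Illustrative model of a chiplet-based package; not real EDA tooling.
from dataclasses import dataclass
from typing import List

@dataclass
class Chiplet:
    name: str        # e.g., compute engine, memory controller, I/O hub
    process_nm: int  # each block uses whatever node suits it best

@dataclass
class Package:
    fabric: str      # the interconnect that ties the blocks back together
    chiplets: List[Chiplet]

    def describe(self) -> str:
        parts = ", ".join(f"{c.name} ({c.process_nm}nm)" for c in self.chiplets)
        return f"{parts}, linked over {self.fabric}"

# Hypothetical CPU: the compute engine shrinks to 7nm, while the analog-heavy
# I/O hub and memory controller stay on a cheaper, better-suited 14nm node.
cpu = Package("coherent fabric", [
    Chiplet("compute engine", 7),
    Chiplet("memory controller", 14),
    Chiplet("I/O hub", 14),
])
print(cpu.describe())
```

The design choice the model captures is exactly the one discussed below: only the blocks that benefit from a shrink pay for one.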

While that may seem a bit counter-intuitive compared to typical semiconductor industry trends, chiplet designs help address several issues that have arisen as a result of traditional advances. First, while integration of multiple components into a single chip arguably makes things simpler, the truth is that today’s chips have become both enormously complex and quite large as a result. Ensuring high-quality, defect-free manufacturing of these large, complex chips—especially while you’re trying to reduce transistor size at the same time—has proven to be an overwhelming challenge. That’s one of the key reasons why we’ve seen delays or even cancellations of moves to current 10nm and 7nm production from many major chip foundries.

Second, it turns out not every type of chip element actually benefits from smaller sizes. The basic argument for shrinking transistors is to reduce costs, reduce power consumption, and improve performance. With elements like the analog circuitry in I/O components, however, it turns out there’s a point of diminishing returns where smaller transistors are actually more expensive and don’t get the performance benefits you might expect from smaller production geometries. As a result, it just doesn’t make sense to try and move current monolithic chip designs to these smaller sizes.

Finally, some of the more interesting advancements in the semiconductor world are now occurring in interconnect and packaging technologies. From the 3D stacking of components being used to increase the capacity of flash memory chips, to the high-speed interfaces being developed to enable both high-speed on-chip and chip-to-chip communications, the need to keep all the critical components of a chip design at the same process level is simply going away. Instead, companies are focusing on creating clever new ways to interconnect IP blocks/components in order to achieve the performance enhancements they used to be able to get only through traditional Moore’s Law transistor shrinks.

AMD, for example, has made its Infinity Fabric interconnect technology a critical part of its Zen CPU designs, and at last week’s 7nm event, the company highlighted how it has extended it to its new data center-focused CPUs and GPUs as well. The next generation Epyc server CPU, codenamed “Rome,” scheduled for release in 2019, leverages up to 8 separate Zen 2-based CPU chiplets (eight cores each) interconnected over the latest generation Infinity Fabric to provide 64 cores in a single SoC. The result, AMD claims, is performance in a single-socket server that can beat Intel’s current best two-socket server CPU configuration.

In addition, AMD highlighted how its new 7nm data center-focused Radeon Instinct GPU designs can now also be connected over Infinity Fabric both for GPU-to-GPU connections as well as for faster CPU-to-GPU connections (similar to Nvidia’s existing NVLink protocol), which could prove to be very important for advanced workloads like AI training, supercomputing, and more.

Interestingly, AMD and Intel worked together on a combined CPU/GPU part earlier this year that leveraged a slightly different interconnect technology but allowed them to put an Intel CPU together with a discrete AMD Radeon GPU (for high-powered PCs like the Dell XPS15 and HP 15” Spectre X360) onto a single chip.

Semiconductor IP creator Arm has been enabling an architecture for chiplet-like mobile SoC designs with its CCI (Cache Coherent Interconnect) technology for several years now. In fact, companies like Apple and Qualcomm use that type of technology for their A-Series and Snapdragon series chips, respectively.

Intel, for its part, is also planning to leverage chiplet technology for future designs. Though specific details are still to come, the company has discussed not just CPU-to-CPU connections, but also being able to integrate high-speed links with other chip IP blocks, such as Nervana AI accelerators, FPGAs and more.

In fact, the whole future of semiconductor design could be revolutionized by standardized, high-speed interconnections among various different chip components (each of which may be produced with different transistor sizes). Imagine, for example, the possibility of more specialized accelerators being developed by small innovative semiconductor companies for a variety of different applications and then integrated into final system designs that incorporate the main CPUs or GPUs from larger players, like Intel, AMD, or Nvidia.

Unfortunately, right now, a single industry standard for chiplet interconnect doesn’t exist—in the near term we may see individual companies choose to license their specific implementations to specific partners—but there’s likely to be pressure to create that standard in the future. There are several tech standards for chip-related interconnect, including CCIX (Cache Coherent Interconnect for Accelerators), which builds on the PCIe 4.0 standard, and the system-level Gen-Z standard, but nothing that all the critical players in the semiconductor ecosystem have completely embraced. In addition, standards need to be developed as to how different chiplets can be pieced together and manufactured in a consistent way.

Exactly how the advancements in chiplets and associated technologies relate to the ability to maintain traditional Moore’s Law metrics isn’t entirely clear right now. What is clear is that the semiconductor industry isn’t letting potential roadblocks stop it from making important new advances that will keep the tech industry evolving for some time to come.

Podcast: Samsung Developer Conference, AMD 7nm, Google Policies

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Samsung’s Developer Conference and the announcements around their Bixby assistant platform and Infinity Flex foldable smartphone display, AMD’s unveiling of their 7nm Epyc CPU and Instinct GPU for the cloud and datacenter market, and Google’s recent internal policy changes on harassment and other issues.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast