Rethinking Smart Home Gateways

It’s the one piece of tech equipment we all have and use almost every day, yet know very little about.

I’m talking about your home broadband router/gateway: the nondescript black box likely installed somewhere in your abode by your cable or satellite service provider. In the early days of internet access, gateway boxes were often separate from both the TV service set-top box and a WiFi router, but today it’s typically all consolidated into a single unit.

Very few people ever dive into the details of a gateway/router’s operation. The few who do are typically greeted with an arcane, browser-based interface loaded with networking buzzwords and engineering jargon. Most people view it as a set-and-forget, utility-focused device, not unlike the gas and power meters that mark where those utilities enter your home.

Yet, as we transition into a world where more and more connected devices are moving into our homes and our dependence on various internet-based services continues to grow, it’s becoming increasingly clear there needs to be a radical rethinking of what these devices do and, more importantly, how they operate.

Imagine this: a device that could leverage either its own sensors or data from your other devices to physically map out your home, then show you where the connected devices are and what they’re doing. It could potentially do this by having you take a few pictures or shoot some video, or by using the kinds of 3D depth sensors now found in augmented and virtual reality products.

Even better, it should be able to physically and visually map out things like WiFi (or cellular) signal strength in different parts of your house. Because signal conditions change, it should be able to do that on a dynamic basis. In fact, smart service providers could even leverage the data to suggest things like adding a WiFi extender for your upstairs bedroom or the basement office.

On top of basic service quality questions, a redesigned gateway experience should be able to answer questions about why a particular service or device isn’t working. It’s not just whether a device has connectivity, but whether it’s getting the kind of messages/data it should. Admittedly, this would take a bit more standardization work, because there would have to be agreement on devices sending out messages saying “this is what I need” and verification that they were receiving it. Imagine how much that kind of capability—explained in plain English—could help people troubleshoot some of their common internet access-related issues.
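To make the idea a bit more concrete, here is a minimal sketch of how such a check might work, assuming a hypothetical standard in which each device declares its needs to the gateway (every message format, device name, and endpoint below is invented for illustration):

```python
# Hypothetical sketch: devices declare what they need; the gateway compares that
# against what it actually observes and reports the result in plain English.
DECLARED_NEEDS = {
    "thermostat-01": {"min_kbps": 5, "endpoints": ["api.example-thermostat.com"]},
    "camera-02": {"min_kbps": 800, "endpoints": ["stream.example-cam.com"]},
}

def check_device(device_id, observed_kbps, reachable_endpoints):
    """Return a plain-English status for one device."""
    needs = DECLARED_NEEDS.get(device_id)
    if needs is None:
        return f"{device_id}: no declared requirements on file."
    problems = []
    if observed_kbps < needs["min_kbps"]:
        problems.append(f"only getting {observed_kbps} kbps (needs {needs['min_kbps']})")
    missing = [e for e in needs["endpoints"] if e not in reachable_endpoints]
    if missing:
        problems.append("can't reach " + ", ".join(missing))
    return f"{device_id}: " + ("; ".join(problems) if problems else "working normally.")

print(check_device("camera-02", 350, ["api.example-thermostat.com"]))
```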

Another critically important capability that could build on these network traffic analysis skills would be related to privacy and security. Wouldn’t you like to know what kind of data is flowing into and out of your home? Again, this would have to be translated into understandable terms—which is challenging to do—but it could be very useful. Plus, intelligence built into the gateway/router could watch for and block potential security issues and could even be kept constantly up-to-date by leveraging some of the new pattern matching-based deep learning tools that are becoming available.[pullquote]Instead of being relegated to a corner and untouched, a truly smart home gateway should be at the very forefront of any consumer’s tech experiences at home. [/pullquote]
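As a rough illustration of that pattern-matching idea (this is my own toy example, not any vendor’s implementation), a gateway could baseline each device’s normal traffic and flag big deviations:

```python
# Toy example: baseline each device's typical daily upload volume and flag large outliers.
from statistics import mean, stdev

def unusual_upload(history_mb, today_mb, threshold=3.0):
    """Return True if today's upload volume is far outside the device's historical norm."""
    if len(history_mb) < 5:
        return False  # not enough history to judge yet
    avg, sd = mean(history_mb), stdev(history_mb)
    if sd == 0:
        return today_mb > 2 * avg
    return (today_mb - avg) / sd > threshold

# A "smart" light bulb that normally uploads about 1 MB a day suddenly sends 250 MB
print(unusual_upload([0.8, 1.1, 0.9, 1.2, 1.0, 0.9], 250))  # True -> worth flagging or blocking
```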

As new devices and/or services are added to your home technology arsenal, this rethought gateway should leverage its visual map to show you where the new devices and/or services are running, what other devices they may or may not be connected to, and whether or not they’re working properly. It’s easy to lose track of all the devices and services people add and remove, so the gateway could also serve as a technology inventory that tracks everything and potentially reminds you to update, replace, or even pay for the services you use.

Given the critical role for-pay services are likely to have in future smart home and consumer Internet of Things (IoT) applications, this last capability could be a lot more important than it first appears.

The bottom line is that, instead of being relegated to a corner and untouched, a truly smart home gateway should be at the forefront of any consumer’s tech experiences at home. Unfortunately, I’m not aware of any vendors with products close to coming to market with the kinds of capabilities I’ve described, so this may all be a pipe dream for a while. I’m convinced, though, that the right kind of design and user experience could turn home gateways from basic necessary evils into the visual centerpiece of our future connected homes.

(I’ll be the chairman of Day 1 of the Smart Home Summit to be held in Palo Alto, CA on November 1, 2016. If you’d like to find out more about the event, you can click here.)

Podcast: IFA 2016

In this week’s Tech.pinions podcast, Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss the IFA consumer electronics show in Berlin, analyze many of the product announcements made there, and talk about the Samsung Galaxy Note 7 recall.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Ridesharing Impact Dramatically Overstated

Amongst the many car tech-related developments, few are receiving more attention than ridesharing services such as Uber and Lyft. To their credit, these companies have managed to build up enormous valuations and continue to receive major investments from traditional auto industry players.

The primary reason for all the excitement isn’t their current business model—providing convenient means to transport people from place to place—but the bigger vision they have of turning cars into a service. The idea is that instead of buying and owning cars, people will turn to this type of automotive service for all their transportation needs, thereby dramatically reshaping the automotive industry and our transportation infrastructure.

In many ways, the concept is certainly an intriguing and vaguely appealing one. In fact, many in the tech and urban-focused Silicon Valley area seem to accept it as a foregone conclusion. According to a recent TECHnalysis Research survey of 1,000 US consumers about automotive-related technologies, including autonomous driving and ridesharing, however, the reality is different. In fact, dramatically so.

To participate in the survey, consumers had to both own a car and expect to purchase one over the next two years. While this precludes representation from people who may have already given up on owning a car (likely a tiny percentage anyway), their impact has already been factored into current car sales. The purpose of this portion of the study was to better understand any potential future impact of ridesharing on car purchases.

The first key point the study uncovered is that a huge percentage of the US population still has little to no experience with any ridesharing service. About 57% have never used a ridesharing service and another 23% have used one only once or twice. That puts rideshare users in the minority; only about 20% are regular or semi-regular users of ridesharing. The top-level results are shown in Figure 1. (Note that the numbers don’t add up to 100% due to rounding.)

Ridesharing Usage, Figure 1

Not surprisingly, the numbers vary by home location, with 31% of city dwellers, 17% of suburbanites and 9% of rural dwellers saying they use ridesharing at least once or twice a month. (FYI, city residents make up roughly 1/3 of respondents, suburban residents approximately 1/2, and rural residents about 1/5—a very similar mix to overall US census data.) The numbers also vary by age, with 32% of those under 35 using it at least once or twice a month.
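For the numerically inclined, a quick back-of-the-envelope check (my own arithmetic, using only the rounded figures quoted above) shows the location-level numbers are consistent with the roughly 20% overall usage figure:

```python
# Back-of-the-envelope check using the rounded survey figures quoted above
mix   = {"city": 1/3, "suburban": 1/2, "rural": 1/5}     # approximate respondent mix
usage = {"city": 0.31, "suburban": 0.17, "rural": 0.09}  # use ridesharing at least 1-2x/month

blended = sum(mix[k] * usage[k] for k in mix)
print(f"{blended:.0%}")  # ~21%, in line with the ~20% overall figure
```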

More importantly, even among those who use ridesharing services, the reasons and situations in which they use them strongly suggest that ridesharing is an occasional supplement to their regular driving. In fact, nearly 75% of ridesharing users view it as a supplemental service for situations such as after drinking, while travelling, or other circumstances where they don’t have access to a car. That leaves just 5% of the total population (or one quarter of all rideshare users) who actually use ridesharing more than just occasionally.

As Figure 2 illustrates, just 4% of all ridesharing users (6% of city-dwelling users) see their cars as either potentially or definitely being replaced by ridesharing services—less than 1% of the total survey respondent base.

Ridesharing Usage, Figure 2

Survey respondents were also asked to rate the amount of influence that ridesharing services would have on their next car purchase (from 0% influence or no impact, to 100% influence or will definitely not buy a car because of them). The results showed that just 8% of the overall total said it would have a strong or greater impact (defined as at least a 50% impact), with 14% of city dwellers, 4% of suburbanites and 10% of rural dwellers selecting those same options. One interesting point to note is that not a single suburbanite said they would definitely not purchase a car because of ridesharing services.[pullquote]Ridesharing services have had an impact on the way that people think about cars, car ownership, and transportation in general. But their influence on new car purchases is likely to be extremely small for some time to come.[/pullquote]

One last point the survey explored was the potential impact that autonomous vehicles would have on ridesharing services. The largest group of respondents, 40%, said that if ridesharing services started using autonomous vehicles, it would not impact their usage of the services. (FYI, only those who had used ridesharing services were asked this question.) However, a combined 43% said they would use ridesharing a little less, use it a lot less, or completely stop using it if autonomous vehicles were put into service. Interestingly, the results were relatively consistent across the different location-based groups, but city dwellers actually had the highest percentage of respondents (15%) who said they would stop using the services completely if autonomous cars were deployed.

There’s no question that ridesharing services have had an impact on the way that people think about cars, car ownership, and transportation in general. But as the survey results dramatically illustrate, their influence on how, why, or if people actually purchase new cars is likely to be extremely small for some time to come. It’s certainly possible that we’ll see some dramatic changes in opinion over the course of the next several years, but for now, the obsession over “cars as a service” is definitely misplaced.

(Last week’s column covered additional data from the survey, including consumers’ interest in autonomous driving features and electric/hybrid cars, as well as factors influencing car purchases.)

Consumer Interest in Auto Tech? Slower Than You Think

The industry announcements around auto-related technologies such as autonomous driving, electric/hybrid vehicles, ridesharing services and more have been coming so fast and furious lately that it seems to be a foregone conclusion that everyone wants all this stuff.

The reality, however, isn’t that clear.

In order to gauge real consumer interest, TECHnalysis Research recently conducted a survey of 1,000 US consumers who currently own a car and are planning to purchase one over the next two years. The results paint a fascinating picture of the things that matter most to them and those that are less important. The bottom line? There are definitely pockets of interest, but they don’t neatly line up with the various stories and agendas that members of the tech and automotive-related industries are currently trying to foist onto our collective consciousness.

First, it’s clear that consumers are moving at a slower pace towards technology adoption than many in the industry are willing to admit. Though interest in automotive technologies is growing, the influence of these technologies on car purchase priorities is still small. Survey respondents were asked to rank their purchase priorities on both the car they currently own and the one they are planning to buy next. From a list of 12 characteristics, such as price, looks, car size/type, performance, etc., autonomous driving features and an electric/hybrid drivetrain ranked dead last, even for future purchases. Figure 1 highlights the top-level results (note that a lower number means a higher priority).

Connected Car Survey, Figure 1

Second, many consumers seem content with getting modest, incremental technology improvements in their future car purchases. Unlike tech-centric markets, such as smartphones, where people are starting to wait for fairly dramatic feature and technology advancements before they make a new purchase, new car buyers are looking for simpler changes. One of the biggest changes in priority between a consumer’s previous car and their next, for example, is an increased interest in in-car WiFi connectivity. Even still, as Figure 1 shows, that technologically simple capability only ranked 8th on the list of 12 characteristics for future car purchases (and 9th for existing car purchases).

Third, some of the tech-related features that are of interest are focused on safety. For example, in a ranking of seven different types of autonomous features, the clear favorites were blind spot warning with automatic intervention, followed by smart front and rear cameras tied to automatic braking systems. These are great new technologies to be sure, but not the whiz-bang autopilot-type modes that many automobile and tech companies are focused on.

Figure 2 shows the complete rankings for autonomous features (again, a lower number means a higher ranking).

Connected Car Survey, Figure 2

In terms of the bigger automotive technology picture, it’s also clear that there’s more interest in autonomous features than there is in electric or hybrid drivetrains. As Figure 3 illustrates, in virtually every age and gender breakdown, the percentage of those who expressed a modest or greater interest in a car that offers some autonomous driving features is higher than the percentage of those from the same group who wanted a car with either an electric or hybrid gas/electric engine and drivetrain.

Connected Car Survey, Figure 3

This is a bit of a challenge to current industry narratives, because many of the more advanced autonomous capabilities seem to be tied to electric cars. The types of advanced features found in Teslas and expected Tesla competitors, for example, are only available with electric engines. Consumer sentiment, however, would suggest that automakers and car suppliers should think about bringing more autonomous features to traditional gas-powered combustion engines.[pullquote]Consumer sentiment would suggest that automakers and car suppliers should think about bringing more autonomous features to traditional gas-powered combustion engines.[/pullquote]

One interesting characteristic of both autonomous driving-related features and electric/hybrid engines is that they are very divisive among consumers. For instance, autonomous features had both the highest percentage of consumers showing some level of interest in them (as shown in Figure 3), as well as the highest percentage of consumers who said they have absolutely no interest in those features. Electric/hybrid engines ranked seventh on a separate list of 11 automotive technologies that respondents rated in terms of importance (pure electric engines were last on the list), but hybrid engines actually received the most #1 votes in that poll at 19%. However, because hybrid engines also had so many lower ratings, their overall average number was brought way down.

These survey results reflect the challenges that automakers, suppliers, and tech companies are going to have in trying to make more advanced automotive technologies mainstream. There are clearly some consumers who are eager and excited to get into these new tech-savvy cars, but based on real-world feedback, it looks to be a very modestly-sized group.

Next week I’ll dig into the impact (or not) of ridesharing services on car usage and potential purchases.

Podcast: Intel IDF

In this week’s Tech.pinions podcast, Tim Bajarin, Ben Bajarin and Bob O’Donnell analyze Intel’s IDF developer event and related announcements, including their agreement to build ARM-based chips, as well as new efforts in AR, VR, Deep Learning, 5G and autonomous cars.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Utility of Cloud Computing

From a technology perspective, the idea of delivering computing services from the cloud has gone mainstream. Every day, it seems, we end up hearing about or interacting with a new service or app that gets its capabilities from the ephemeral and, frankly, sometimes baffling idea of computers in the sky.

Well, OK, not exactly—advanced computing topics aren’t always known for their precision of language and clarity of meaning—but we all do use lots of online resources that are powered by servers and other computing devices that we can’t see or touch.

For consumers, these types of cloud computing-driven interactions are becoming regular and commonplace. Looking for transportation? Hail a ride from Uber or Lyft. Need to settle a debate? Ask Siri, Cortana, Google Now or another personal assistant. Want to listen to your favorite tunes? Fire up Spotify, Pandora, Tidal or a host of other choices.

Businesses can also leverage cloud computing-based services from the likes of Salesforce, Dropbox, and hundreds of other companies. There’s also a rapidly growing business in offering cloud computing itself as a service from companies like Amazon, Microsoft, and Google.

In all cases, the idea is to leverage a seemingly inexhaustible supply of computing power, storage space, and fast network connection pipes to deliver computing as a utility, much like power companies deliver electricity to all our homes and businesses.

Web-based companies, like the ones mentioned above, are writing software to take advantage of this new utility in a way that allows them to run services on top of this infrastructure and build a business model around them.

Traditional businesses, however, have been much slower to move to this new flexible, but often technically challenging, type of computing. Oh sure, there’s been a lot of talk about creating “private clouds” (companies build their own web-like computing infrastructure, leveraging the same kinds of tools and methods used for the public internet but keeping everything inside their own walls), or “hybrid clouds”, which mix some elements of “private clouds” with “public clouds” hosted out on the Internet. In reality, however, adoption of these new concepts has moved slower than many initially expected.

The reasons for these delays are many. First, there is the basic question of trust. Many companies have been very leery of letting their digital crown jewels outside the walls of their organization. Not as widely discussed, but equally problematic, is the issue of job security. If projects that IT used to manage are being handled by outside cloud companies, won’t that reduce the need for some IT jobs?

Another big issue is technical complexity and limited skill sets. Many cloud computing concepts, tools, structures and methodologies can be very challenging, and traditional business IT departments generally don’t have enough people with the capabilities to do the work. (Of course, this relates back to job security as well.)

As time has passed, however, many businesses are starting to recognize that their fears were either unfounded or not as troublesome as they first thought. In the case of trust and security, for example, it’s becoming increasingly clear that companies who specialize in cloud computing are so highly focused on security that they’re likely to provide a safer environment than a company’s own network.

We’ve also seen the rise of companies like Rackspace and other managed service providers that can help companies who don’t necessarily have the in-house expertise to make the transition to cloud-based computing services.

The net result is that we’re turning the corner on cloud computing models becoming mainstream options in traditional businesses as well. This represents a significant sea change that’s likely to have important repercussions within the overall business computing environment for many years to come.

On one hand, the improved flexibility that the dynamic, quickly evolving cloud computing methods can bring to businesses should help them in a number of areas, from delivering mobile versions of custom business applications more rapidly, to integrating with partners and other web services more easily.[pullquote]Just as few companies today think of running their own power grid, there may come a day when companies will look to a highly consolidated group of compute utility companies to deliver some of the services they now provide.[/pullquote]

But the move also implies that many businesses will start to slowly get out of the business of hosting their own data centers, preferring to have that computing “utility” hosted by an outside party, such as an Amazon, Microsoft or Google. For companies like HP Enterprise, Dell, Cisco and others that generate significant revenue from selling enterprise hardware to companies who have been running their own data centers, the changes could be particularly profound.

Of course, not all companies will completely move away from running their own data centers, nor will the ones who start to do so make those changes overnight. But just as few companies today think of running their own power grid, there may come a day when companies will look to a highly consolidated group of compute utility companies to deliver some of the services they now provide.

Adjusting to the potential of that new reality will keep the enterprise computing market interesting to observe for several years to come.

Podcast: Walmart-Jet Purchase, Intel IDF Preview

In this week’s Tech.pinions podcast, Tim Bajarin, Jan Dawson and Bob O’Donnell discuss Walmart’s recent purchase of Jet.com and its implications for online commerce, and analyze Intel’s strategy as it prepares for next week’s Intel Developer Forum (IDF) event.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Digital Identity Dilemma

On the one hand, the problem seems obvious. We all need some kind of consistent digital identity (think virtual ID “card”) that can identify and authenticate us not only to all our devices, but to all our online services, commerce and banking accounts, and essentially anywhere else we need to digitally, or even physically, verify who we are.

Actually solving that problem, it turns out, is pretty hard. For one, any kind of digital identity solution needs to be platform- and device-independent. Sure, it’s fine to be able to swipe into your phone with a fingerprint reader, but most people own more than just a smartphone and, in many cases, their devices run on different platforms.

Plus, merely logging into the device doesn’t transfer your credentials to all the password-protected websites you use, services you log into, etc. Sure, there’s been some useful improvement in this area over the last few years, but we’re still a long way from the nirvana of what I like to call a portable digital identity.

Think of a portable digital identity as something akin to a digital passport that could not only identify you to known locations, but unknown situations as well. Want to be able to get immediate access to your Spotify account while using grandma’s PC? As long as she has internet access, no problem.

One of the most obvious benefits of this type of digital ID would be the eventual abolition (at least, in theory!) of passwords. We all know how horrendously broken the concept is, and the amount of money, time and effort passwords waste—not to mention the incredible amount of frustration they regularly generate—is now measured in extraordinarily large numbers, both for individuals and companies.

Recently, there have been a number of important steps made toward achieving more universal digital identities. Key among them is the work done by the FIDO Alliance, an industry organization whose members include Microsoft, Google, Intel, Qualcomm and Samsung, among many others, but notably not Apple. Last fall, the organization submitted its FIDO 2.0 Web APIs to the W3C internet standards body as part of an effort to allow digital identity and authentication credentials to be passed from device to device and from device to website.

Essentially, this will enable people to leverage technologies like biometrics—fingerprints, face recognition, iris scanning (like on Samsung’s new Galaxy Note 7), and more—not only to identify themselves to the local device, but to other devices as well. Even better, it will enable apps, websites and other services to seamlessly recognize you via that same identity verification. Once it’s widely adopted, this could be the ultimate “friction-removing” technology. These Web APIs should be able to dramatically change how quickly and easily we use web services, make online transactions, and much more, all while dramatically decreasing the potential for fraud and identity theft.[pullquote]The FIDO 2.0 Web APIs should be able to dramatically change how quickly and easily we use web services, make online transactions, and much more, all while dramatically decreasing the potential for fraud and identity theft.[/pullquote]

Microsoft provided an early version of support for these standards in the enhanced version of Windows Hello that’s built into the new Anniversary Update of Windows 10. In fact, Microsoft is supporting what it calls the Windows Hello Companion Device Framework, which allows external devices, such as wearables or other Bluetooth-equipped devices with biometric sensors, to bring biometric security to devices that don’t have it and to extend that level of verification to any sites or services that support FIDO 2.0.

Of course, the security questions about how this all works and how effective it will really be in the real world have been debated quite a bit. While it’s impossible to say that it’s hack-proof, the good news is that the entire effort has been built with worst-case scenarios in mind.

The technology used to enable the security can be very complex, but there are a few basic concepts worth mentioning. To start, all these efforts begin with a hardware root of trust on the end user device, such as a TPM (Trusted Platform Module) or some other kind of digital security chip that is physically isolated from the main processor and OS. Leveraging virtualization or similar software isolation technologies, the information used to identify and verify you is encrypted and kept separate from main memory, making it extremely difficult to access. In fact, compromising it would typically require physically tapping into the device, which greatly reduces the threat in most situations. Plus, that identifying information isn’t directly passed along, but instead is only used to start the process of verification.
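The general shape of that flow can be sketched in just a few lines. This is a simplified illustration of the FIDO-style public-key approach rather than the actual FIDO 2.0 protocol, and it uses the third-party Python cryptography package; in real hardware the private key would live inside the TPM or secure element and only be unlocked by your biometric.

```python
# Simplified illustration: biometric data and the private key never leave the device;
# the service only ever sees a public key and a signed, single-use challenge.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Device side: key pair created once (in real hardware, inside a TPM or secure element)
device_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_key.public_key()   # shared with the service at enrollment

# Service side: issue a random, single-use challenge
challenge = os.urandom(32)

# Device side: a fingerprint or face scan unlocks the key locally, which then signs the challenge
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Service side: verify with the stored public key (raises an exception if the signature is invalid)
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("identity verified")
```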

The net result is that highly personal biometric information is not only extremely hard to acquire, but can’t be used to directly tap into an account in the same way that a stolen password potentially can.

Even with all these efforts, we’re several steps away from a truly standardized, universal digital identity, but it’s clear that we’re much closer to the goal than even just a year ago. By late 2016 or early 2017, the W3C is likely to approve the FIDO 2.0 Web APIs, and that’s bound to create some strong momentum around these extremely important standards. Your portable digital identity is nearly here.

Podcast: Samsung Note 7, Snapchat-Instagram, Uber in China

In this week’s Tech.pinions podcast, Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss Samsung’s release of its Note 7 smartphone, the battle between Snapchat and Instagram around their Stories feature, and Uber’s recent sale of its China business.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

IoT Strategies Going Vertical

A nearly existential search for meaning and direction is hitting the tech industry and its major players in a way that they’ve never truly experienced before. Why? It’s a combination of factors including market maturity, flattened growth in key categories, and the lack of a clear picture about where things are headed.

One of the few directions most seem to agree on is the growing importance of the Internet of Things (IoT). Even here, however, the general fuzziness about what IoT really is, how to best approach it, and what the real opportunities are is leading to a lot of head-scratching and strategic adjustments.

Many companies, for example, initially tried to approach IoT with a more horizontal perspective, hoping to find solutions that worked across multiple industries and applications. Fairly quickly, however, most have found that they need to refine and focus their efforts across many separate vertical applications in order to find success.

In addition, while many companies see IoT as an opportunity to expand beyond their core strengths, most are discovering those efforts will likely take much longer than they first thought. Instead, they’re finding that creative applications of what they already do offer the shortest path to success.

For example, while connectivity and compute are clearly common characteristics across almost all IoT applications, smartphone component leader Qualcomm is starting to find traction in IoT by creating an extended range of reference platforms using its components across nearly 25 different applications. From drones, to wearables, to smart meters and connected cars, the company has built and shared an impressive range of specialized designs, leveraging various members of its Snapdragon family of CPU and modem SoCs.

Qualcomm has found that many of the smaller (and even some larger) players entering specific IoT markets don’t have the in-house expertise to design the circuit boards they need to drive their creations. As a result, they’re starting to attract attention thanks to the extra effort of creating these vertical-specific solutions.

Data analytics is another shared characteristic of IoT, and much of it happens in the cloud or in corporate data centers. Intel has recognized this opportunity and is leveraging its strength in servers and data center infrastructure to become more relevant in IoT. Rather than generically throwing x86 hardware at the problem, however, the company is increasingly focused on more specialized solutions. In particular, it is focused on the growing use of its new FPGA assets (a result of the recent Altera acquisition), which allow Intel to create programmable chips that can be ideally matched to the different analytics needs of different IoT markets.

In a related fashion, the people at Dell have started to create a line of IoT gateways, which are essentially industrial PCs with additional types of connectors that allow them to easily integrate into many types of IoT environments. Their goal is to build a range of different solutions that allow them to create the kind of distributed computing architectures and “fog-based” computing components (closer to the ground than “cloud”) which IoT deployments are starting to use.

Companies like HPE are leveraging their strength in big data and analytics software and services to meet the unique needs of different IoT applications. In addition, HPE has built ruggedized edge-based servers like the EdgeLine series, which incorporate data acquisition hardware from National Instruments (NI) for industrial IoT applications.[pullquote]The IoT opportunity is so large that there isn’t a single answer about how to best address it. [/pullquote]

Speaking of which, NI, best known for its test and measurement equipment, is also enabling specific IoT solutions. The company’s latest version of LabVIEW offers a modular software platform design that incorporates a number of new elements, including some designed for data analytics. In addition, LabVIEW Communications puts an emphasis on building and connecting together blocks of application-specific code for the radio and communications-level elements that are so essential to IoT.

Of course, these companies (and many others) also compete directly with one another in other aspects of the IoT market. But the IoT opportunity is so large that there isn’t a single answer about how to best address it. Eventually, companies can (and undoubtedly will) work to broaden their offerings across a wider range of the IoT market. In the short term, however, leveraging existing strengths across a range of different verticals seems to be the way to go.

Podcast: Apple Earnings, Smartphone Market Trends

In this week’s Tech.pinions podcast, Tim Bajarin, Carolina Milanesi and Bob O’Donnell discuss Apple’s earnings in depth and look at overall trends impacting the smartphone market.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Creating New Worlds

The SIGGRAPH trade show is the holy grail of computer graphics and, increasingly, mobile graphics, virtual reality and augmented reality. It’s here where GPU vendors introduce some of their latest creations, software companies debut new offerings, and where you’ll see some of the coolest looking tech demos you’ve ever come across.

This year’s show—appropriately held across the street from Disneyland in Anaheim, CA—lived up to those expectations, with numerous announcements from multiple vendors. While they all featured their own unique perspective, they shared a similar theme of enabling the creation of entirely new worlds, much like Walt did in his own day.

Nvidia kicked things off by introducing what they claim to be the world’s fastest GPU, the Quadro P6000, less than a week after they introduced their Titan X GPU, which held a very brief reign at the top of the performance heap. The critical difference, of course, is that the Quadro P6000 is a high-end professional graphics card designed for workstations, versus the consumer-oriented and gaming-focused Titan X. Both are based on Nvidia’s new Pascal architecture, but the P6000 ups the ante to 12 teraflops of single-precision performance (vs. 11 teraflops for the Titan X) with more graphics cores and a larger and faster bank of onboard memory.

Not to be outdone, the newly rejuvenated AMD debuted its new line of lower-cost Radeon Pro WX workstation-focused cards, which are available for under $1,000 and take over from the previous FirePro line. In addition, AMD used its first major SIGGRAPH press event in years to highlight a new technology called SSG (Solid State Graphics), which enables the company to deliver a terabyte of memory onboard prototype graphics cards via an embedded SSD (a flash-based solid-state drive). Apparently the cards—commercial versions are expected in 2017—leverage a PCIe peer-to-peer connection to the drive to give them extremely speedy performance.

In the mobile arena, Qualcomm took advantage of SIGGRAPH to provide some additional details about its work with Google on Tango-enabled smartphones, such as the previously announced Lenovo Phab 2 Pro. Ironically, a key element of the Qualcomm Snapdragon design used for the Tango-enabled device is that it doesn’t use the GPU at all for its augmented reality efforts. Instead, a combination of the DSP, sensor hub, and image processing elements of the chip—in conjunction with sophisticated global clocking elements that keep disparate elements in sync—enables some promising new augmented reality applications, including the potential for new types of GPS-independent indoor mapping.

On the software side, both Nvidia and AMD discussed new developments that can enable real-time viewing and streaming of high-quality (up to 4K) 360° video streams. This extremely computation-intensive task stitches together the video feeds of multiple cameras into a single video signal which can be viewed on VR headsets, either locally or remotely via the cloud. Importantly, both efforts help make the possibility of delivering non-gaming VR applications much more likely in the near future.[pullquote]Both Nvidia and AMD talked about additional efforts to build tools to more quickly enable the creation of new virtual worlds.[/pullquote]

Both companies also talked about additional efforts to build tools to more quickly enable the creation of new virtual worlds. Nvidia discussed its Iray software, which enables near real-time ray tracing of 3D-modelled worlds, while AMD highlighted its new ProRender (previously called FireRender), which is noteworthy because AMD chose to make it completely open source.

In both cases, the companies stuck to the theme of delivering faster, easier and more efficient ways of creating virtual and augmented worlds. If either VR or AR headsets and related products are ever to reach mainstream, they’re going to be very dependent on a robust and varied set of hardware and software tools that can help in the creation of some compelling new worlds. Based on this year’s SIGGRAPH announcements, it looks like we’re moving in the right direction.

Podcast: Softbank ARM Purchase And Its Impact On The Semiconductor Market

In this week’s Tech.pinions podcast, Tim Bajarin, Ben Bajarin and Bob O’Donnell discuss the surprising announcement of Softbank’s intention to purchase ARM and what its potential implications are for the future of the semiconductor market.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The State of Smart Homes

The promise was big, the hype was bigger, but the reality, well, let’s just say, not exactly earth-shattering.

Several years after the concept of the smart home started making waves—the first Nest thermostat was introduced in October of 2011—the smart home market is still more promise than practical reality. Certainly, progress has been made, and with the forthcoming version of the HomeKit-enabled Home app in iOS 10 slated for this fall, more is on the near-term horizon.

But, it’s also fair to say that most people haven’t exactly been caught up in a wave of smart home euphoria. Other than the modest success of web-enabled home security cameras, most consumers haven’t felt compelled to equip their homes with connected light bulbs, appliances and other smart devices.

It’s not hard to see why. First, as has already been discussed ad infinitum, there’s a standards problem—as in, there are way too many of them, and battles exist at nearly every potential layer of the communications stack. As a result, the process of finding products that will work together is a much harder (and more limiting) research project than it should be.

On top of the obvious technical challenges, there are other more practical usability challenges. First, most devices have their own apps, which often means you have to switch from app-to-app to get common things done. In theory, some of the upcoming developments in smart home platforms—like the enhancements coming to Apple’s HomeKit mentioned earlier—should help in this regard.

An even bigger question in my mind, however, is why do we have smart home apps at all? While developers like to believe that people carry their smartphones everywhere, the truth is, for many people, home is the one place where they don’t always carry their phone. Between plugging it in to charge, setting it down with keys or other objects, or just finally taking a break from it, it’s not at all uncommon to be “phoneless” when you’re at home. Needless to say, it’s pretty tough to do trivial yet essential things, like turning on a light, if you don’t have immediate access to the software that function might depend upon. When you’re outside your home and want to check in on something, of course, smart home apps are essential. Inside, however, the value proposition for apps isn’t always very clear.

In fact, it’s this desire to be “phone-free” at home that, in part, has triggered the growing interest in voice-based devices like Amazon’s Alexa-powered Echo. By leveraging a fixed-location appliance that doesn’t require anything but your ability to speak to it, you can accomplish a number of things more easily, more quickly, or just plain more possibly than having to dig into your pocket or purse to get your phone, find an app, launch it and then do whatever it is you want to do.

Of course, because it’s a fixed location device, unless your living space is tiny, you could easily need several in order to get the kind of control you’d want throughout your entire home or apartment. That, in turn, leads to yet another challenge for smart home solutions: cost.

In many cases, the cost of a single smart home device really isn’t that bad. But unlike most other tech gadgets, smart home devices need to be bought in bulk—think about how many light bulbs you have around your house—in order to provide their full benefit. Needless to say, that can add up very quickly.[pullquote]Unlike most other tech gadgets, smart home devices need to be bought in bulk…in order to provide their full benefit.[/pullquote]

Given all these costs and challenges, it’s easy to understand why smart home services, such as AT&T’s Digital Life and Vivint’s SmartHome, have enjoyed some degree of success. The barriers to entry for a DIY smart home remain fairly high for most consumers, so having someone do the work of installing and integrating everything is bound to be appealing to a certain group of consumers.

Even with those pre-packaged solutions—which tend to focus primarily on security—there still really aren’t the kind of comprehensive, fully intelligent systems that many of the first visions of smart homes promised. Instead, we see more up-to-date versions of things like home automation, multi-room entertainment, and security services that, frankly, have been around for decades.

For consumers to find a compelling enough value in smart home products and services, there needs to be a lot more thought given to how people want to interact with their homes, both inside and outside, in a seamless manner. Additionally, more intelligent learning has to go on so that the elements in a smart home can self-discover and self-configure with whatever other devices already exist in a home, in a manner that makes them quickly and genuinely useful.

The promise of an automated home is still a compelling one, but we have to move past simply getting our homes connected and start making them smart.

(If you’re interested in learning more about the smart home industry opportunity, I’m pleased to be the Chairman for day one of the Smart Home Summit in Palo Alto, CA on November 1, 2016. You can learn more about the event here.)

Pokemon Go is an AR Watershed

No, it’s not the first and no, it’s not the best. But, there is no question that the incredible success of the Pokemon Go game is an absolute watershed moment for augmented reality.

Despite serious questions about security issues (both digital and physical), battery life impact and cloud infrastructure support, among other issues, the game’s incredible, nearly overnight success now means that no one will ever need to explain what augmented reality is to almost anyone.

“You know, like Pokemon Go.” “Oh, got it.”

Simple though that may sound, it’s huge. And it’s something that this still fledgling technology really needed.

The truth is, augmented reality and virtual reality are technologies that are very difficult to explain to people who haven’t had the chance to actually try them. Many tech industry insiders seem to be glossing over this as a non-issue, but for these technologies to really go mainstream at any time in the future, this is exactly what needed to happen. So, it’s important to recognize the Pokemon Go phenomenon for what it is—a game changing opportunity that sets the stage, eventually, for the success of other augmented reality-based applications and devices.

At the same time, it’s important to point out that this does not mean we will soon see a raft of successful AR-based games and other applications. We will undoubtedly see an enormous number of AR-related launches, regardless of how quickly (or not) the Pokemon Go craze fades out, but the vast majority will have little to no impact whatsoever.

As many others have rightly pointed out, the combination of Pokemon, smartphones, GPS, and AR is a match made in heaven—and one that won’t be easily captured again, at least to this degree. The ephemeral and cartoon nature of Nintendo’s famous characters, as well as the whole gestalt of what Pokemon are, and have been, to the critical millennial audience that’s at the heart of this craze, makes the Pokemon Go phenomenon a singular opportunity. It is well-timed and, in spite of the concerns mentioned at the beginning, seemingly well executed.

Still, the fact that hundreds of millions of people are getting their first direct exposure and personal experience with augmented reality—simple though it may be—can’t help but be a boost to the future of this exciting new technology.[pullquote]The fact that hundreds of millions of people are getting their first direct exposure and personal experience with augmented reality—simple though it may be—can’t help but be a boost to the future of this exciting new technology.[/pullquote]

Plus, Pokemon Go is providing a number of other interesting and unexpected benefits to augmented reality, particularly around physical activity and social interaction. In an age where technology usage has led to people sitting and staring blankly into their smartphone screen as the sad new normal, the phenomenon of people moving around and talking to each other because of their tech devices is an incredibly refreshing change—short-lived though it may be.

The social impact and success of Pokemon Go also raises questions about virtual reality, which tends to be much more of an individual experience. Yes, there is work being done to make virtual reality applications more social, but the nature of the experience makes these efforts more challenging. Whether this will matter in the long run is hard to say at this point, but I think the surprisingly social nature of Pokemon Go should make VR device and application makers do some serious thinking about what lessons they can learn from it.

For augmented reality device makers, the challenge will be to show people why they need a dedicated device. If they can get the simple, but satisfying AR experience they want from a “regular” smartphone, they’re going to have to build a compelling value argument for why people need to make the leap to a separate AR device.

Of course, it’s important to bear in mind that we’re just talking about a smartphone game. Given the very finicky and quickly changing tastes of mobile gamers, Pokemon Go may not even make it as a historical footnote for this year. Still, it has the feel of being something that will have a somewhat longer-lasting impact, particularly because of the manner in which it’s introducing people to a new concept.

It’s easy to forget how difficult it is for critical tech breakthroughs to reach mainstream acceptance. But the simple, silly experience of capturing Pokemon through the compute and sensor-equipped devices we all carry—our smartphones—is going to introduce an enormous number of people to a completely different way of thinking about how tech devices can change (and improve) the manner in which they interact with the world around them. In my mind, that’s an important step forward.

Car Wars: The Battle for Automotive Tech

And so it has begun.

With the announcement late last week by BMW, Intel, and Mobileye of a new reference platform for autonomous cars—expected to be available in model year 2021 vehicles—the battle lines are being drawn for what promises to be one of the most interesting tech industry developments in the next several years.

On the one side, you have tech giants Apple and Google, who also happen to be the two most valuable companies in the world. Either in secret or openly, they’re working to create automotive platforms and possibly even cars themselves, leveraging their software and user experience expertise. On the other, you have the automakers, several of which used to be among the world’s largest companies. As a group, they are painfully aware of how critical technology has become in the car purchasing process, yet extremely concerned that the partnerships they need to meet these new requirements could lead to a loss of control or, at least, a major decline in customer influence.

In the middle, you have a range of the most important and/or most innovative semiconductor companies—names like Intel, Nvidia, Qualcomm, and ARM—hungry for a new growth market and eager to cash in on what many expect to be one of the strongest segments of the tech hardware economy for the next decade.

Topping off this tasty automotive tech sandwich are some of the first widespread deployments of cutting edge technologies like deep learning, neural networks, artificial intelligence and advanced connectivity technologies (think 5G), all of which are necessary to make the promise of truly autonomous cars a reality.

Toss in the disruptive business model innovations of ride-sharing companies like Uber and Lyft and things start to get even more interesting. Add in the political and regulatory intrigue bound to impact the market as the result of the unfortunate first accidental death of a driver using an autonomous driving feature on a Tesla Model S, and it’s not hard to imagine the screenplay of a movie thriller nearly writing itself.

From a platform and technology perspective, this is a battle that will have several different fronts. The two main campaigns will likely focus on a car’s infotainment system and its autonomous driving capabilities. The truth is, cars have grown into enormously complex devices (see this recent column I wrote for USAToday for more), but these two separate systems are seen as the crown jewels of automotive tech.[pullquote]The two main campaigns will likely focus on a car’s infotainment system and its autonomous driving capabilities.[/pullquote]

While infotainment used to mean essentially the car radio and navigation system, it has blossomed into the car’s entire interface. Everything from the car’s internal app platform to heads-up displays, interactive gauges, comfort controls, driving assistance features and more is shown to the car’s occupants through the various displays and components that make up today’s advanced infotainment systems.

At the moment, Apple’s CarPlay and Google’s Android Auto are staking out a portion of the infotainment software experience, although not all of its expanded reach. For obvious reasons, carmakers have been reluctant to hand over the entire user experience to the tech giants.

Underneath the hood of the infotainment system, companies like Nvidia, Qualcomm, Intel, and ARM licensees such as Renesas, STMicro, and TI have all provided semiconductor components that are driving the infotainment experience. Given the increasingly visual nature of these elements, expect to see a lot more discussion around car-based graphics.

The autonomous driving capabilities are being handled through a combination of different elements, some of which are on the car and some of which are expected to be delivered via the cloud. Last week’s BMW, Intel, and Mobileye announcement, for example, leverages the sensor fusion and machine vision experience from Mobileye on the car, while also using local Intel CPUs and a connection to an Intel-driven cloud computing resource running deep learning and neural network applications.

Nvidia has also been prominent in moving forward with a distributed computing approach to autonomous driving. The company has debuted its Drive PX 2 platform for doing the on-car computing and inferencing for driver assistance and autonomous drive applications, while also talking about its cloud-based, GPU-driven neural network efforts.

In addition to these two main systems, there are a number of other critical elements, some of which provide links between them. Notably, the communications capabilities of today’s cars are about to surpass those of any device we own. Qualcomm, for example, just debuted a new upgradeable automotive communications platform that incorporates radios for 4G, WiFi, Bluetooth and DSRC, a technology expected to be used for future vehicle-to-vehicle communications. Connectivity is critical for both infotainment and autonomous driving (as well as things like over-the-air software upgrades and much more), so expect to see some increased competition here as well.

Despite all these recent developments, it will likely be the end of the decade before we really know how the car wars play themselves out. In the meantime, it’s going to be one of the most epic battlegrounds for both new world and old world businesses the market has seen in quite some time. Who knows? It could end up being the technological stuff of legend.

Digital Audio Progress Highlights Tech’s More Human Future

What happens when a technology gets as good as it can get?

It’s an interesting question, and not necessarily as far-fetched or ill-timed as you may imagine.

Consider the world of digital audio. As a musician, music lover, former music equipment industry journalist and self-professed audiophile, I admit to caring a lot more about audio than most, but there are certain facts that are interesting for anyone to think about. We can now record and play back audio, particularly music, at a level that is arguably beyond what most any human can actually hear. Today’s HD Audio equipment supports 24 bits per “word” at recording resolutions of up to 192 kHz (and sometimes even higher). To put that in perspective, uncompressed CD-quality audio is 16-bit at a recording rate of 44.1 kHz.

In other words, today’s highest resolution stereo formats have about 6.5x more data than what many consider to be at the upper end of what the average person can discern. Also, bear in mind that many people happily listen to 128 kbps MP3 files, which stream at a rate that is less than 1/10th that of uncompressed CD-quality audio (1,411 kbps).[pullquote]From a purely technical perspective, recording resolutions could go even higher, but for any applications involving people, there’s no point: digital audio recording technology has peaked.[/pullquote]
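For anyone who wants to check the arithmetic, the raw data rates for uncompressed stereo PCM work out as sample rate times bit depth times channel count:

```python
# Uncompressed stereo PCM data rates: sample rate (Hz) x bit depth (bits) x channels
hd_audio_bps = 192_000 * 24 * 2    # 24-bit / 192 kHz  -> about 9,216 kbps
cd_audio_bps = 44_100 * 16 * 2     # 16-bit / 44.1 kHz -> about 1,411 kbps
mp3_bps      = 128_000             # a typical 128 kbps MP3

print(hd_audio_bps / cd_audio_bps)  # ~6.5x more data than CD quality
print(mp3_bps / cd_audio_bps)       # ~0.09, i.e. less than 1/10th of CD quality
```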

From a purely technical perspective, recording resolutions could go even higher, but for any applications involving people, there’s no point: digital audio recording technology has peaked. So, does that mean developments in digital audio have stopped? No, but they have gone off in lots of interesting directions, some of which could prove to be interesting predictors of where other technologies might follow.

First, as with many technologies, price points for higher-quality audio components and technologies have come down. You can now find reasonably high-quality audio outputs on toys and other low cost items. However, because the highest level of quality, HD Audio, is seen as a technology focused on a loyal, yet relatively small audience, it can still command a premium.

Audio components have also been miniaturized to fit into a wide variety of devices. In fact, there’s been a great deal of speculation recently about Apple and other vendors offering high-quality, wireless in-ear buds for the forthcoming iPhone 7 and/or any other device that chooses to forgo a standard 3.5mm headphone jack.

But this could easily prove to be a case where the technology actually gets too small. Can you imagine how many people would lose tiny earbuds that easily pop into and out of their ears? I think we may discover that, in certain cases, cords and other elements that seem to unnecessarily increase the size of some technology products are actually more useful (and important) than most people realize.[pullquote]Even more interesting is the conscious decision to return to audio formats and audio quality that are arguably or unquestionably worse than what’s possible.[/pullquote]

Even more interesting is the conscious decision to return to audio formats and audio quality that are arguably or unquestionably worse than what’s possible. For example, the resurgence of recorded music on vinyl has proven to be much more than a fad, particularly among millennials. Now, the debate about the quality of analog vinyl versus digital recordings is essentially a religious one that’s been going on since the introduction of the CD. However, you can now make an argument that digital versions have become more accurate than vinyl.

In the case of musical equipment, analog synthesizers have seen a remarkable resurgence over the last several years, being integrated into an enormous range of musical styles. In addition, some of the most popular recording effects are variations on what are termed “bit crushers”—effects that intentionally reduce the number of bits in a digital audio stream in order to create a lower-quality, but unique-sounding signal.
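
To give a sense of how simple the core idea is, here’s a minimal, illustrative bit-crusher sketch (not any particular product’s algorithm): it simply re-quantizes a signal to a coarser bit depth.

```python
import numpy as np

def bit_crush(signal, bits):
    """Re-quantize a signal (floats in [-1, 1]) to a coarser bit depth.

    Fewer bits means fewer possible amplitude steps, which produces the
    gritty, lo-fi character the effect is prized for.
    """
    steps = 2 ** (bits - 1) - 1          # e.g. 4 bits -> 7 steps per polarity
    return np.round(signal * steps) / steps

# A one-second 440 Hz sine wave at 44.1 kHz, crushed down to 4 bits.
t = np.linspace(0, 1, 44_100, endpoint=False)
crushed = bit_crush(np.sin(2 * np.pi * 440 * t), bits=4)
```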

What’s interesting about these last few examples is that they have brought audio out of the more conceptual, purely digital world, back into the tangible, physical world. You can hold and flip vinyl; you can turn lots of knobs on analog synthesizers; you can make enjoyable sounds that aren’t the best possible quality. In short, you can physically interact with the technology in a very pleasing, very human way.

It’s a feeling that many people realize they’ve missed with their soulless, touchscreen-based devices. I think it’s also a feeling that many other product designers are going to incorporate into their future products, across a wide range of technology-driven categories.

At the same time, the advancements in digital audio technology are allowing a higher quality experience than we’ve ever been able to enjoy. With the right kind of digital music files, recorded, mixed, and mastered in high-resolution form (unfortunately, a tiny fraction of available digital music), played back on the right kind of HD Audio equipment, you can experience a level of audio fidelity, sense of space, and overall musicality that makes the technology completely fade away. In a word, pure audio bliss.

Taken together, it’s the ability to both achieve a level of technological perfection and force the exploration of a new means of interaction that makes digital audio a potentially interesting proxy for where other technologies may head. In both instances, it’s driving a more human-centered approach to technology, which is bound to lead to some interesting developments to come.

Podcast: The Post Device Era

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss a recent Fast Company article written by Bob O’Donnell on the Post Device Era and its implications.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

IoT Faces Challenges with Scale

One of the core tenets of any business or technological initiative is, in order to achieve mainstream success and widespread adoption, the primary concept must be able to scale. Sure, it is a great proof-of-concept if you can effectively deploy a technology in one location, but if you want to make a major impact, you have to be able to replicate that ability across many places.

Unfortunately, achieving scale often does not come easy—or at all.

Because of often minor (and sometimes major) differences between locations, environments, equipment, personnel, processes and many other factors, the solutions put together in one context often do not work in another.

Early adopters of IoT (Internet of Things) products and technologies in business environments have started to discover these scale challenges are very real. As a result, their IoT deployments are moving at a much slower pace than they originally hoped. In fact, many organizations are still in the POC (Proof of Concept) stage for IoT, even after several years of trying.

Given all the hype and discussion around Enterprise IoT, this is proving to be very frustrating for both end customers and the many technology companies and solution partners selling IoT-related products and services. After all, many in the press, analyst, and vendor communities have been touting IoT as the “Next Big Thing”, with the ever-growing predictions of connected devices and dollars spent on the initiatives reaching almost laughable proportions.

However, once you get past the appealing concept of IoT and all it potentially enables and actually dig into the practical realities of where most companies are today, you can quickly start to see the problems. In addition to the operational and financial challenges associated with IoT I’ve written about previously, the need for highly specialized and highly customized solutions makes IoT difficult to scale.

Imagine, for example, a manufacturing company that wants to leverage IoT-related technologies to modernize its operations, improve its manufacturing efficiency, and gather better analytics about its overall operations. More than likely, they have multiple manufacturing sites with different types (and ages) of manufacturing equipment which, in turn, create different types of workflows.

Just dealing with the different ages of the manufacturing equipment on one site is often challenge enough. Multiply that by the number of different sites a company has and the problems become much harder. In the computer-dominated world of IT, the concept of “legacy” devices and software often refers to something that’s five years old. In the world of manufacturing and operations, it’s not uncommon to find fully functional equipment that’s 35 or more years old. As a result, it’s extremely challenging to figure out ways to get a consistent set of data to analyze across all these different devices.

Modern manufacturing equipment likely offers a whole range of data feeds, a wide selection of connectivity options, and straightforward means to integrate the data output into modern data analytics software. Older equipment, on the other hand, likely requires retrofitting of sensors, connectivity, and simple compute endpoints in order to generate any kind of meaningful data at all. However, achieving those upgrades typically requires bringing in a team of outside specialists with deep knowledge of not only a specific industry but also the specific company and that worksite location.

The simple solution would be to just replace all the older manufacturing equipment but, given the high capital outlays required to do so, it’s not a realistic option. Plus, that’s just not how people in the operations world think or work—they’re focused on utilizing the equipment they oversee as long as possible—and that isn’t likely to change anytime soon.

These types of challenges aren’t limited to manufacturing companies, by the way. There are different, though analogous, challenges for companies across a wide range of industries, from transportation and logistics, to health care, food service, and much more.

[pullquote]IoT in business environments is not a product or even a technology, it’s a process. That makes it extremely challenging to scale.[/pullquote]

Part of the problem is most people aren’t thinking about IoT in the right way. IoT in business environments is not a product or even a technology, it’s a process. That makes it extremely challenging to scale. Another issue is many companies get so caught up in IoT’s transformative potential, they become overwhelmed with options and don’t know how or where to start.

So, does this mean all is lost when it comes to Enterprise IoT and that we’ll one day look back on it as yet another technological passing fad? Hardly. There is a reason the vision of billions of connected devices and all the potential information and capabilities they can enable is such a compelling concept. There is a real “there” there and the prospective value IoT offers is an attractive proposition that will keep smart people and smart companies working towards bringing at least some of its potential to life for some time to come.

The timelines for when any meaningful payoffs arrive and the pace at which the technology will actually be deployed, however, are in need of some serious re-examination. Achieving scale in a process-driven business will not come quickly, and companies at all levels of the IoT value chain need to adjust their expectations accordingly.

Apple Drives Apps into Services

The keynote event at the annual Apple Worldwide Developer Conference (WWDC) is usually about apps. This year, however, things were a bit different.

Oh sure, Apple touted 2 million apps in the iOS app store, over 6,000 available for tvOS, and highlighted the work of a few small developers, but the key takeaway from this year’s event was all about services: Siri, Messages, Maps, and Music.

For those who closely watch Apple, this probably isn’t too surprising, as the company has been under pressure to focus more on services in light of declining hardware sales, and it has been increasingly vocal about its efforts to expand services.

What became very interesting to me as I watched the keynote and then thought about all the new capabilities the company introduced, however, is that, in many ways, Apple is subsuming the capabilities of standalone apps into its services.[pullquote]In many ways, Apple is subsuming the capabilities of standalone apps into its services.[/pullquote]

The most compelling demos at the event all involved integrating capabilities that used to require launching a separate app directly into an expanded Apple service. Want to call an Uber or a Lyft and see where it is? You can do so via Messages and Maps. Want to view rich web site links and even make purchase transactions or send money via Square or other online payment methods? All that can happen without leaving Messages as well.

This decreased focus on individual apps and increased focus on services isn’t necessarily a bad thing. In fact, it’s arguably the general direction that software development—particularly on mobile devices—is going. In China, for example, you can get access to these kinds of services and much more in WeChat. These developments do, however, imply a fairly major shift in the role that developers can and will have with Apple and end users. They also imply a major rethink in terms of what an “app” actually is.

In essence, the new app model in this services-focused approach is a “service extension,” which strikes me as being much more similar to the “skills” you can add to an Amazon Echo than a traditional app.

The problem is that this model almost completely cuts out the prominence, and the end user awareness, that app developers currently enjoy. Does anyone think of the “skills” they add to an Echo in the same way they do an app? Oh yeah, and what about monetization for those service extension developers?

Given the overall app fatigue that many people face, as well as flattening hardware sales, it’s getting tough enough for people to survive as app developers. Add in this additional layer of abstraction, and the distance between developers and end users will likely increase at about the same rate that dollars from them decrease—pretty fast.

For Apple, the situation is a challenging one. On the one hand, it’s expected that they will continue to drive forward the overall experience of using their devices. They need to provide capabilities that their end users want, and it’s becoming increasingly clear that a more seamless, less app-focused approach is the way to go. On the other hand, by subsuming not only the functionality of individual apps, but the manner in which developers can extend the experience of using their devices, they’ve created a bigger business challenge for many developers.

To Apple’s credit, the company opened up Siri, Messages and Maps to outside developers for the first time with the announcements at this year’s WWDC. Given the company’s notoriously tight grip on core system elements and their desire to control the entire Apple device experience, this is a significant development. But given the overall market trends towards services, you could make the argument that they essentially had to in order to give their developers a fighting chance.

The ongoing shift from software to services is something that extends well beyond Apple, of course. Microsoft, Google, Amazon and Facebook all face different types of challenges as we move into a world where we are much less dependent on individual devices and specific OS platforms. But as the company that arguably made “apps” what they are, Apple and its developers will likely face some of the most challenging transitions.

Some of these services-focused changes may take several years to play out, but they’re clearly leading the tech industry down some new paths.

Podcast: Apple App Store, Lenovo-Motorola Smartphones

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the changes that Apple made to some of their app store policies and analyze the new smartphone offerings from Lenovo and Motorola, including the Google Tango-based Phab 2 Pro and the modular Moto Z.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Evolution of Cloud Computing

Servers are probably not near the top of your list of conversation topics, nor are they something most people think about. But, it turns out, there are some interesting changes happening in server design that will start to have a real-world impact on everyone who uses both traditional and new computing devices.

Everything from smart digital assistants to autonomous cars to virtual reality is being enabled and enhanced with the addition of new types of computing models and new types of computing accelerators to today’s servers and cloud-based infrastructure. In fact, this is one of the key reasons Intel recently doubled down on their commitment to server, cloud, and data center markets as part of the company’s evolving strategy.

Until recently, virtually all the computing effort done on servers—from email and web page delivery to high performance computing—was done on CPUs, conceptually and architecturally similar to the ones found in today’s PCs.

Traditional server CPUs have made enormous improvements in performance over the last several decades, thanks in part to the benefits of Moore’s Law. In fact, just this week Intel announced a new Xeon server CPU, the E7 v4, which is optimized for analytics and features up to 24 independent cores.

Recognizing the growing importance of cloud-based computing models, a number of competitors have tried to work their way into the server CPU market but, at last count, Intel still owns a staggering 99% share. Qualcomm has announced some ARM-based server CPUs and Cavium introduced their new ThunderX2 ARM-based CPU at last week’s Computex show in Taiwan, but both companies face uphill battles. A potentially more interesting competitive threat could come from AMD. After a several-year competitive lull, AMD is expected to finally make a serious re-entry into the server CPU market this fall, when their new x86 core, code-named Zen (which they also previewed at Computex), is slated to be integrated into new server CPUs.

Some of the more interesting developments in server design are coming from the addition of new chips that serve as accelerators for specific kinds of workloads. Much as a GPU inside a PC works alongside the CPU and powers certain types of software, new chips are being added to traditional servers in order to enhance their capabilities. In fact, GPUs are now being integrated into servers for applications such as graphics virtualization and artificial intelligence. The biggest noise has been created by NVIDIA with their use of GPUs and GPU-based chips for applications like deep learning. While CPUs are essentially optimized to do one thing very fast, GPUs are optimized to do lots of relatively simple things simultaneously.

Visual-based pattern matching, at the heart of many artificial intelligence algorithms, for example, is ideally suited to the simultaneous computing capabilities of GPUs. As a result, NVIDIA has taken some of their GPU architectures and created the Tesla line of accelerator cards for servers.
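
To make that distinction concrete, here’s a toy sketch of a data-parallel, pattern-matching style workload. It uses plain NumPy on a CPU purely as a stand-in for the kind of computation a GPU would spread across thousands of cores; it is not how any vendor’s actual libraries work.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256), dtype=np.float32)   # stand-in for a photo
patch = rng.random((8, 8), dtype=np.float32)       # pattern to look for

# Score every 8x8 window of the image against the patch. Each window's
# score is independent of all the others, so they could all be computed
# at the same time -- exactly the shape of work GPUs are built for.
windows = np.lib.stride_tricks.sliding_window_view(image, (8, 8))
scores = (windows * patch).sum(axis=(-2, -1))      # shape: (249, 249)
best_match = np.unravel_index(scores.argmax(), scores.shape)
```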

While not as well known, Intel has actually offered a line of parallel processing-optimized chips, called Xeon Phi, to the supercomputing and high-performance computing (HPC) market for several years. Unlike Tesla (and the forthcoming offerings AMD is likely to bring to servers), Intel’s Phi chips are not based on GPUs but on a different variant of its own x86-based designs. Given the growing number of parallel processing-based workloads, it wouldn’t be surprising to see Intel bring the parallel computing capabilities of Phi to the more general purpose server market for machine learning workloads in the future.

In addition, Intel recently made a high profile purchase of Altera, a company that specializes in FPGAs (field programmable gate arrays). FPGAs are essentially programmable chips that can be used to perform a variety of specific functions more efficiently than general purpose CPUs and GPUs. While the requirements vary depending on workloads, FPGAs are known to be optimized for applications like signal processing and high-speed data transfers. Given the extreme performance requirements of today’s most demanding cloud applications, the need to quickly access storage and networking elements of cloud-based servers is critical and FPGAs can be used for these purposes as well.

Many newer server workloads, such as big data analytics engines, also require fast access to huge amounts of data in memory. This, in turn, is driving interest in new types of memory, storage, and even computing architectures. In fact, these issues are at the heart of HP Enterprise’s The Machine concept for a server of the future. In the nearer term, memory architectures like the Intel- and Micron-driven 3D XPoint technology, which combines the benefits of traditional DRAM and flash memory, will help drive new levels of real-time performance even with existing server designs.[pullquote]Today’s servers have come a long way from the PC-like, CPU-dominated world of just a few years back.[/pullquote]

The bottom line is that today’s servers have come a long way from the PC-like, CPU-dominated world of just a few years back. As we see the rise of new types of workloads, we’re likely to see even more chip accelerators optimized to do certain tasks. Google, for example, recently announced a TPU, which is a chip they designed (many believe it to be a customized version of an FPGA) specifically to accelerate the performance of their TensorFlow deep learning software. Other semiconductor makers are working on different types of specialized accelerators for applications such as computer vision and more.

In addition, we’re likely to see combinations of these different elements in order to meet the wide range of demands future servers will face. One of Intel’s Computex announcements, for example, was a new server CPU that integrated the latest elements of its Xeon line with FPGA elements from the Altera acquisition.

Of course, simply throwing new chips at a workload won’t do anything without the appropriate software. In fact, one of the biggest challenges of introducing new chip architectures is the amount of effort it takes to write (or re-write) code that can specifically take advantage of the new benefits the different architectures offer. This is one of the reasons x86-based traditional CPUs continue to dominate the server market. Looking forward, however, many of the exciting new cloud-based services need to dramatically scale their efforts around a more limited set of software, which makes the potential opportunity for new types of chip accelerators a compelling one.

Diving into the details of server architectures can quickly get overwhelming. However, having at least a basic understanding of how they work can help give you a better sense of how today’s cloud-based applications are being delivered and might provide a glimpse into the applications and services of tomorrow.

Voice-Based Computing with Digital Assistants

It’s been a long time coming, but it looks like the era of voice-driven computing has finally arrived.

Powered by the latest advancements in artificial intelligence and deep learning, the new generation of smart digital assistants and chatbots are clearly some of the hottest developments in the tech industry. Not only are they driving big announcements from vendors such as Google, Microsoft, Amazon, Facebook and Apple, they’re expected to enable even bigger changes long term.

In fact, as the technology improves and people become more accustomed to speaking to their devices, digital assistants are poised to change not only how we interact with and think about technology, but even the types of devices, applications and services that we purchase and use. The changes won’t happen overnight, but the rise of these voice-driven digital helpers portends some truly revolutionary developments in the tech world.

Fine and good, you say, but what about the here and now? Short term, expect to see a lot of efforts geared towards improving the accuracy and reliability of our interactions with these assistants. We’ve all either made or heard jokes about the “creative” interpretations of various requests that Siri and other digital assistants have made. While they may seem funny at first, these types of experiences quickly sour people on voice-driven interactions. In fact, many people who tried these assistants early on stopped using them because of those bad experiences.

To overcome this, vendors are spending a lot of time fine-tuning various parts of the interaction chain, from initial speech recognition to server-based analysis. For example, on some devices, companies are able to leverage enhanced microphones and audio signal processing algorithms. As with so many things in life, speech recognition often suffers from a garbage in, garbage out phenomenon. In other words, the quality and “cleanliness” of the audio signal being processed can have a big impact on the accuracy of the recognition. The more work that can be done to pre-process and filter the audio before it’s analyzed, the better. (FYI, this is also true for image-based recognition—image processing engines on today’s advanced smartphones are increasingly being used to “clean up” photos and optimize them for recognition.)
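
As a rough illustration of that clean-up step, here’s a minimal sketch of the kind of pre-processing a speech front end might apply before recognition. Real pipelines do far more (beam-forming, noise suppression, echo cancellation); the steps below are just common, basic examples.

```python
import numpy as np

def clean_for_recognition(audio, pre_emphasis=0.97):
    """Minimal, illustrative front-end clean-up before recognition."""
    audio = np.asarray(audio, dtype=np.float64)
    audio = audio - audio.mean()                    # remove any DC offset
    # Standard pre-emphasis filter: boost high frequencies slightly.
    audio = np.append(audio[0], audio[1:] - pre_emphasis * audio[:-1])
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio      # normalize the level
```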

The real heavy lifting typically occurs on the back end, however, as enormous cloud-based data centers are typically employed to interpret the audio and provide the appropriate response. This is where huge advancements in pattern-based deep learning algorithms are helping not only improve the accuracy of recognition, but also, more importantly, the relevance of the response.

Essentially, the servers in these data centers quickly compare the results of the incoming audio snippets to enormous databases of keywords, key phrases and portions of spoken words known as phonemes, in order to find matches. In many cases, individual words are then combined into a phrase, and that combined phrase is then compared to yet another database to find more matches. Ultimately, the entirety of what was said is pieced together, and then more work is done to provide an appropriate response to the request.
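
A deliberately toy sketch of that matching flow is below. Real systems use probabilistic acoustic and language models rather than exact dictionary lookups, but the general shape is similar: phoneme groups are matched to words, and the resulting phrase is matched to a request the service knows how to handle. Every entry in it is invented for illustration.

```python
# Toy lookup tables standing in for the enormous databases described above.
PHONEME_TO_WORD = {
    ("W", "AH", "T", "S"): "what's",
    ("DH", "AH"): "the",
    ("W", "EH", "DH", "ER"): "weather",
}
PHRASE_TO_INTENT = {
    ("what's", "the", "weather"): "lookup_weather_forecast",
}

def decode(phoneme_groups):
    """Map groups of phonemes to words, then the whole phrase to an intent."""
    words = tuple(PHONEME_TO_WORD.get(tuple(g), "<unk>") for g in phoneme_groups)
    return words, PHRASE_TO_INTENT.get(words, "no_match")

words, intent = decode([("W", "AH", "T", "S"), ("DH", "AH"), ("W", "EH", "DH", "ER")])
# words == ("what's", "the", "weather"); intent == "lookup_weather_forecast"
```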

Improvements in accuracy will come from a combination of increasing the size and level of detail in the various databases, along with advancing the speed and filtering techniques of the pattern matching algorithms at the heart of these artificial intelligence engines. In addition, vendors are just beginning to leverage the increased number of sensors available in smartphones and other voice input devices in order to start providing better context about where a person is located or what that person is doing, in order to improve the appropriateness of the response.

For example, asking what the temperature is in a particular location typically provides the outside temperature, but if you actually wanted to know the temperature inside a room or building, you would have to combine the temperature from a sensor along with the original request to generate a more accurate response.[pullquote]By having a better sense of context, a smart digital assistant can actually start providing information even before it’s been asked.[/pullquote]

Though subtle, these kinds of contextual cues can greatly improve the appropriateness of a digital assistant’s response. These kinds of efforts will also be essential to help drive the next stage of digital assistance: proactive suggestions, information, and advice. Up until this point, much of the voice-based computing work described has occurred only in reaction to a user’s requests. By having a better sense of context, a smart digital assistant can actually start providing information even before it’s been asked.

Context can come not only from sensors, but awareness of, for example, information we’ve been searching for, documents we’ve been working on and much more.
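
As a hypothetical sketch of the temperature example above, the snippet below shows how the same spoken request might get a different answer depending on context. Every name in it (read_room_sensor, get_outdoor_forecast, user_is_indoors) is invented purely for illustration.

```python
def answer_temperature_query(location, user_is_indoors,
                             read_room_sensor, get_outdoor_forecast):
    """Answer "what's the temperature?" using whatever context is available."""
    if user_is_indoors:
        indoor = read_room_sensor()
        if indoor is not None:
            return f"It's {indoor:.0f} degrees in this room."
    return f"It's {get_outdoor_forecast(location):.0f} degrees outside in {location}."

# Example: the user is indoors and a connected thermostat reports 71 degrees.
print(answer_temperature_query(
    "San Jose",
    user_is_indoors=True,
    read_room_sensor=lambda: 71.0,
    get_outdoor_forecast=lambda loc: 58.0,
))
```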

Now, if implemented poorly, these proactive efforts could quickly become even more annoying than the sometimes laughable reactive responses to our voice requests. But if the concept is done well, these kinds of efforts can and will turn digital assistants into very beneficial helpers that could drive voice-based computing into a new age.

Turning Makers into Manufacturers

If you ever start to wonder whether there’s anything more out there these days than a rehashing of existing ideas in the tech field, a quick trip to a Maker Faire will answer your question definitively. Absolutely.

At the 11th annual Bay Area Maker Faire, held at the San Mateo County Fairgrounds this past weekend, there was an abundance of wild, wacky, fun, ingenious, clever projects and ideas on display. Most of them have a tech angle—though not all—and some of them are meant to be the start of, or enablers for, full-fledged commercial endeavors.

The simultaneous growth of crowdfunding sites like Indiegogo alongside the rising popularity of the Maker movement has had a particularly strong impact on Makers, inventors and entrepreneurs who want to turn their ideas into real businesses—especially those who want to build something. It’s now very feasible to go from an original concept to a commercial success using nothing more than what was available from companies demoing their wares at the Maker Faire.

From Arduino, Raspberry Pi and all their single-board computer brethren to 3D printers, desktop-size laser-etching machines, and low-cost CNC (computer numerical control) mills, the Maker Faire offers an amazing collection of tools that regular people did not have access to even just a few years back. Using these kinds of devices, individuals can now create products that look and function on a quality level that is as high (if not higher!) than many major commercial entities.

The timing of these developments is particularly relevant for the tech hardware business. We’ve entered an era when many of the innovations in major devices—smartphones, PCs, tablets—have noticeably slowed. However, there’s an explosion of innovation going on in more specialized areas like consumer and commercial IoT applications. The Maker movement is ideally suited to drive critical innovations in IoT because of the kind of specialization and customization that many IoT applications require.[pullquote]The Maker movement is ideally suited to drive critical innovations in IoT because of the kind of specialization and customization that many IoT applications require.[/pullquote]

Plus, the truth is, many of these kinds of products are probably going to be much lower volume than would be necessary to justify their ongoing existence in large companies. But they are likely more than enough to sustain and grow a wide range of small businesses. In other words, the kind of creative hardware projects that stem from Makers are exactly what the world needs right now.

But knowing there’s a need and turning that killer idea into a real business still isn’t that easy. For one, it’s hard to find people who are experts in all the disciplines used to make a modern connected device. One possibility is to leverage the creative communities that have built up around the Maker movement—both virtually and physically—to learn the requisite skills. A related possibility is to join something like TechShop, the Maker-friendly company that offers all the kinds of tools I was describing earlier (and even bigger, more professional versions), and take some of the classes they offer to learn how to use those tools.

Once you have the skills, you have to move on to the manufacturing, and this is where many people get stuck. If you’re fortunate, you can connect with funding sources like venture capital firms and get introduced to manufacturing sites like factories in Shenzhen, China, or perhaps to a US-based contract manufacturing firm like Flextronics. But most of those efforts are for companies who want to build in the tens or hundreds of thousands of units, and who are shooting to be the next billion dollar unicorns.

Many Maker-driven efforts are likely to involve hundreds or just a few thousand units, and businesses shooting for revenues in the six or maybe seven figures, not nine or ten. Until recently, this was a tough spot to be in. Thankfully, one of the more interesting announcements from this past weekend’s Maker Faire was a new collaboration between Indiegogo and Arrow, the large electronics distribution firm. The companies are putting together what they’re calling a platform to move from crowdfunded idea to full production, with the promise of lots of help along the way.

There is a range of options available, with a lucky few projects receiving $500,000 worth of support, but it basically allows Maker inventors to get access to Arrow’s product design tools and engineers, prototyping capabilities, supply chain expertise and much more. Arrow plans to assess and then select a variety of Indiegogo projects based on their technical viability, manufacturability and potential market impact and then mark certain ones with an Arrow logo. From there, they will work with those individuals/companies to make sure their products come to market—not always a given for many crowdfunded ideas.

As we move deeper into the Internet of Things and continue to see Makers help drive the kinds of innovations that are going to be necessary to succeed in this burgeoning field, removing impediments that keep creative ideas from becoming reality will be essential. So it’s great to see new efforts that really can turn Makers into manufacturers.

Podcast: Google I/O, Nokia Phones, Apple in India, Maker Faire

In this week’s Tech.pinions podcast Ben Bajarin, Carolina Milanesi and Bob O’Donnell analyze Google’s I/O event, discuss Microsoft’s sale of its feature phone business and Nokia’s licensing of its brand for new phones, address Apple’s efforts in India, and describe a new agreement between Indiegogo and Arrow to enable low-volume manufacturing for members of the Maker Movement.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast