The Robotic Future

Of all the futuristic technologies that seem closer to becoming mainstream each day, robotics is the one that is likely to elicit both the strongest and widest range of reactions. It’s not terribly surprising if you really think about it. After all, robots in various forms offer the potential for both the most glorious beneficence and the most insidious evil. From performing superhuman feats to the complete destruction of the human race, it’s hard to imagine a technology that could have a more wide-ranging impact.

Of course, the practical reality of today’s robots is far from either of these extremes. Instead, they’re primarily focused on freeing our lives and our businesses of the drudgery of mundane tasks. Whether it’s automatically sweeping our floors or rapidly piecing together elements on an assembly line, the robots of today are laser-focused on the practical. Still, whenever most people think about robots in any form, I’m guessing visions of dystopian robot futures silently lurk in the back of their minds–whether people want to admit it or not.

We can’t help it, really. We have all been exposed to so many types of robotic visions in our various forms of entertainment for so long that it’s hard to imagine not being at least somewhat affected. Whether through the pioneering science fiction novels of Isaac Asimov, the giddy futurism of the Jetsons cartoons, the hellish destruction of the Terminator movies, or countless other examples, we all come to the concept of robotics with preconceived notions. Much more than with any other technology, it’s very difficult to approach robotics objectively.

Now that we’re starting to see some more interesting new advances in robotics-driven services—such as food and package delivery and, eventually, autonomous cars—the question becomes how those loaded expectations will impact our view and acceptance of these new offerings. At a simplistic level, it’s easy to say—and likely true—that we can accept these basic capabilities for what they are: minor conveniences. No need to worry about robotic delivery carts causing much more damage than scaring a few pets, after all.

In fact, initially, there is likely to be a “cool” factor of having something done by a robot. Just as with other new technologies, it may not even matter if it’s the best or most efficient way of achieving a particular task: the novelty will be considered a value unto itself. Eventually, though, we’ll likely start to turn a more critical eye to these capabilities, and only those that can offer some kind of lasting value will succeed.

But the real challenge will come when we start to combine robotics with Artificial Intelligence (AI) and deep learning. That’s where things can (and likely will) start to get both really exciting and really scary. The irony is that to achieve the kind of “Asimovian” robotic benevolence that our most positive views of the technology bring to mind—whether that be robotic surgery, butler-like personal assistant services, or other dramatically beneficial capabilities—the machines are going to have to get smarter and more capable.

However, we’ve also seen how that movie ends—not well. Though it’s admittedly a bit irrational, there’s no shaking the fear that we’re rapidly approaching a point in the evolution of technology—driven by this inevitable blending of robotics and software-driven machine learning—where some really big, society-impacting trends could start to develop. We won’t really be able to recognize them for some time, but it does feel like we’re on the cusp.

Of course, there is also the potential for some incredibly positive developments. Removing people from dangerous conditions, helping extend our ability to further explore both our world and our universe, letting people focus on the things that really matter to them, instead of things they have to do. As we move forward with robotics-driven technological advances and transition from science fiction to reality, the possibilities are indeed endless.

We should be ever mindful, however, of just how far we are willing to go.

Podcast: Facebook F8 Conference, Apple Diabetes Tool

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the wide range of developments from this week’s Facebook F8 Conference, as well as rumors that Apple may be developing a tool for monitoring diabetes.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Should Apple Build a Car?

As your mother or other caregiver likely told you as a child, just because you can do something, doesn’t mean you necessarily should.

So, given last week’s news that Apple has obtained a permit to test drive three autonomous cars on public streets and highways in California, the existential question that now faces the company’s Project Titan car effort is, should they build it?

Of course, the answer is very dependent on what “it” turns out to be. There’s been rampant speculation on what Apple’s automotive aspirations actually are, with several commentaries suggesting that those plans have morphed quite a bit over the last few years, and are now very different (and perhaps more modest) than they originally were.

While some Apple fans are still holding out hope for a fully-designed Apple car, complete with unique exterior and interior physical design, a (likely) electric drivetrain, and a complete suite of innovative software-driven capabilities—everything from autonomous and assisted driving features to the in-vehicle infotainment (IVI) system and more—other observers are a bit less enthusiastic. In fact, the more pragmatic view of the company creating autonomous driving software for existing cars—especially given the news on their public test driving effort—has been getting much more attention recently.

Regardless of what the specific elements of the automotive project turn out to be, there remains the philosophical question of whether or not this is a good thing for Apple to do. On the one hand, there are quite a few major tech players who are trying their hands at autonomous driving and connected car-related developments. In fact, many industry participants and observers see it as a critical frontier in the overall development and evolution of the tech industry. From that perspective, it certainly makes sense for Apple to, at the very least, explore what’s possible, and to make sure that some of its key competitors can’t leapfrog them in important new consumer technologies.

In addition, this could be an important new business opportunity for the company, particularly critical now that many of its core products for the last decade have either started to slow or are on the cusp of hitting peak shipment levels. Bottom line, Apple could really use a completely different kind of hardware hit.

The prospect is particularly alluring because some research conducted by TECHnalysis Research last fall shows that there is actually some surprisingly large pent-up demand (in theory at least) for an Apple-branded car. In fact, when asked about the theoretical possibility of buying just such an automobile, 12% of the 1,000-person sample said they would “definitely” buy an Apple car. (Note that 11% said they would definitely buy a Google-branded car.) Obviously, until such a beast becomes a reality, this is a completely speculative exercise, but remember that Tesla currently accounts for just a tiny fraction of one percent of car sales in the US, so even a small slice of that stated demand would be meaningful.

Look at the possibility of an Apple car from another perspective, however, and a number of serious questions quickly come to mind. First is the fact that it’s really hard to build and sell a complete car if you’re not in the auto industry. From component and supplier relationships, to dealer networks, through government-regulated safety requirements, completely different manufacturing processes, and significantly different business and profitability models, the car business is not an easy one to successfully enter at a reasonable scale. Sure, there’s the possibility of finding the auto equivalent of an ODM (Original Design Manufacturer) to help with many of these steps, but there’s no Foxconn equivalent for cars in terms of volume capacity. At best, production levels would have to be very modest for an ODM-built Apple car, which doesn’t seem like an Apple thing to do.

Speaking of which, the very public nature of the auto business and the need to reveal product plans and subject products to testing well in advance of their release are also very counter to typical Apple philosophy. Similarly, while creating software solutions for existing car makers is technically intriguing, the idea of Apple merely supplying a component on products that are branded by someone else seems incredibly unlikely. Plus, most car vendors are eager to maintain their brand throughout the in-car experience, and giving up the key software interfaces to a “supplier” isn’t attractive to them either.

So, then, if it doesn’t make sense or seem feasible to offer just a portion of an automotive experience and if doing a complete branded car seems out of reach, what other options are left? (And let’s be honest—in an ideal situation, autonomous driving capabilities should be completely invisible to the driver, so what’s the brand value for offering that?)

Theoretically, Apple could come up with some type of co-branded partnership arrangement with a willing major car maker, but again, does that seem like something Steve would do?

There’s no doubt Apple has the technical ability and financial wherewithal to pull off an Apple car if they really wanted to, but the practical challenges it faces suggest it’s probably not their best option. Only time will tell.

Podcast: Huawei Analyst Summit, Le Eco, Chinese Vendors

In this week’s Tech.pinions podcast Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss Huawei’s recent analyst summit event, difficulties facing Le Eco, and the overall opportunities and challenges for Chinese vendors to break into the US and worldwide markets in a meaningful way.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Little Data Analytics

For years, the mantra in the world of business software and enterprise IT has been “data is the new gold.” The idea was that companies of nearly every shape and size across every industry imaginable were essentially sitting on top of buried treasure that was just waiting to be tapped into. All they needed to do was to dig into the correct vein of their business data trove, and they would be able to unleash valuable insights that could unlock hidden business opportunities, new sources of revenue, better efficiencies, and much more.

Big software companies like IBM, Oracle, SAP, and many more all touted these visions of data grandeur and turned the concept of big data analytics, or just big data, into everyday business nomenclature.

Even now, analytics is also playing an important role in the Internet of Things (IoT), on both the commercial and industrial side, as well as on the consumer side. On the industrial side, companies are working to mine various data streams for insights into how to improve their processes, while consumer-focused analytics show up in things like health and fitness data linked to wearables, and will soon be a part of assisted and autonomous driving systems in our cars.

Of course, the everyday reality of these grand ideas hasn’t always lived up to the hype. While there certainly have been many great success stories of companies reducing their costs or figuring out new business models, there are probably an equal (though unreported) number of companies that tried to find the gold in their data—and spent a lot of money doing so—but came up relatively empty.

The truth is, analytics is hard, and there’s no guarantee that analyzing huge chunks of data is going to translate into meaningful insights. Challenges may arise from applying the wrong tools to a given job, not analyzing the right data, or not even really knowing exactly what to look for in the first place. Regardless, it’s becoming clear to many organizations that a decade or more into the “big data” revolution, not everyone is hitting it rich.

Part of the problem is that some of the efforts are simply too big—at several different levels. Sometimes the goals are too grandiose, sometimes the datasets are too large, and sometimes the valuable insights are buried beneath a mound of numbers or other data that just really isn’t that useful. Implicit in the phrase “big data,” as well as the concept of data as gold, is that more is better. But in the case of analytics, a legitimate question is worth considering: Is more really better?

In the world of IoT, for example, many organizations are realizing that doing what I call “little data analytics” is actually much more useful. Instead of trying to mine through large datasets, these organizations are focusing their efforts on a simple stream of sensor-based data or other straightforward data collection work. For the untold number of situations across a range of industries where these kinds of efforts haven’t been done before, the results can be surprisingly useful. In some instances, these projects create nothing more than a single insight into a given process that companies can quickly adjust for—a “one and done” type of effort—but ongoing monitoring of these processes can ensure that the adjustments continue to run efficiently.

Of course, it’s easy to understand why nobody really wants to talk about little data. It’s not exactly a sexy, attention-grabbing topic, and working with it requires much less sophisticated tools—think an Excel spreadsheet (or the equivalent) on a PC, for example. The analytical insights from these “little data” efforts are also likely to be relatively simple. However, that doesn’t mean they are any less practical or valuable to an organization. In fact, building up a collection of these little data analytics could prove to be exactly what many organizations need. Plus, they’re the kind of results that can help justify the expenses necessary for companies to start investing in IoT efforts.
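
To make the idea a bit more concrete, here’s a minimal sketch of the kind of “little data” analysis described above (in Python rather than a spreadsheet; the file name, column name and threshold are all hypothetical): read a single stream of sensor readings and flag when a process drifts out of its normal range.

```python
# Illustrative "little data" analysis: one sensor stream, one simple insight.
# The CSV file name, column name and threshold below are hypothetical.
import csv
from statistics import mean

WINDOW = 20       # number of recent readings to average
LIMIT_C = 75.0    # assumed temperature limit for this particular process

def read_temperatures(path):
    """Yield temperature readings from a simple CSV log with a 'temp_c' column."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield float(row["temp_c"])

def check_drift(readings):
    """Flag the first point where the rolling average drifts past the limit."""
    window = []
    for i, value in enumerate(readings):
        window.append(value)
        if len(window) > WINDOW:
            window.pop(0)
        if len(window) == WINDOW and mean(window) > LIMIT_C:
            return f"Rolling average exceeded {LIMIT_C} C at reading {i}"
    return "Process stayed within its normal range"

if __name__ == "__main__":
    print(check_drift(read_temperatures("oven_sensor_log.csv")))
```

That’s the entire “analysis.” The value comes from the insight and the ongoing monitoring, not from the sophistication of the tooling.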

To be fair, not all applications are really suited for little data analytics. Monitoring the real-time performance of a jet engine or even a moving car involves a staggering amount of data that’s going to continue to require the most advanced computing and big data analytics tools available.

But to get more real-world traction for IoT-based efforts, companies may want to change their approach to data analytics efforts and start thinking small.

Podcast: Apple Mac Pro, iPad as PC, Surface J.D. Power

In this week’s Tech.pinions podcast Carolina Milanesi, Ben Bajarin and Bob O’Donnell discuss Apple’s recent announcements about future iterations of the Mac Pro, the potential (or not) of an iPad as a PC, and J.D. Power’s satisfaction ratings putting Microsoft’s Surface above the iPad.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Samsung Building a Platform Without an OS

For the last 20+ years, the traditional thinking in the tech industry has been that in order to have any real power and influence, you had to have an operating system. Companies like Microsoft, Apple, and Google have turned their OS offerings into platforms, which could then be leveraged to provide additional revenue-generating services, as well as drive the direction and application agenda for other companies who wanted to access the users of a particular OS.

In an effort to follow that strategy, we’ve witnessed a number of companies try, unsuccessfully, to reach a position of power and control in the tech industry by building or buying operating systems of their own. From Blackberry, to HP and LG (with WebOS), to Samsung (with Tizen), there have been numerous efforts to try to replicate that OS-to-platform strategy.

Over the last year or so, however, we’ve begun to see the rise of platforms that are built to be independent from an OS. Prominent among these are Amazon, with Alexa, Facebook with, well, Facebook, and most recently, Samsung with a whole set of services that, while initially focused on their hardware, actually reflect a more holistic view of a multi-connection, multi-device world.

Interestingly, even many of the traditional OS vendors are starting to spend more time focusing on these “metaplatform” strategies, as they recognize that the value of an OS-only platform is quickly diminishing. Each of the major OS vendors, for example, is placing increased emphasis on their voice-based assistants—most of which are available across multiple traditional OS boundaries—and treating them more like the OS-based platforms of old.

Moving forward, I suspect we will see more machine learning and artificial intelligence (AI)-based services that may connect to the voice-based assistants or the traditional OSes, but will actually be independent of them. From intelligent chatbots that enable automated tech support, sales and other common services, to smart news and media-delivery applications, these AI-based services are going to open up a sea of new opportunities for these “new” platform players.

Another key new service will likely be built around authentication and digital identity capabilities. This will serve not only as a first log-in of the day, but also as an identity gateway for e-commerce, online banking, secure communications, and many other key services that require verification and authentication of one’s identity.[pullquote]While some OS-independent platform strategies have been known for some time, the recent Samsung S8 launch event unveiled the first real glimpse of what Samsung may have in mind going forward.[/pullquote]

While some of these OS-independent platform strategies have been known for some time, the recent Samsung S8 launch event unveiled the first real glimpse of what Samsung may have in mind going forward. Because of the company’s extensive range of not only consumer tech products, such as smartphones, tablets, wearables and PCs, but also TVs and other consumer electronics, along with white goods like connected appliances, Samsung is uniquely positioned to deliver the most comprehensive connected hardware (and connected home) story of almost any company in the world. In fact, with the recent purchase of Harman—a major automotive component supplier—they can even start to extend their reach into connected cars.

To date, the company hasn’t really leveraged this potential position of power, but it looks like they’re finally starting to do so. Samsung Pass, for example, moves beyond the simple (though critical) capability of digital payments offered in Samsung Pay, to a complete multi-factor, biometric-capable identity and verification solution. Best of all, it appears to be compatible with the FIDO Alliance standard for the passing of identity credentials between devices and across web services, which is going to be a critical capability moving forward.

On a more concrete level, the Bixby Assistant on the S8, of course, provides the kind of voice-based assistant mentioned previously, but it also potentially ties in with other Samsung hardware. So, for example, you will eventually be able to tell Bixby on your Samsung phone to control other Samsung-branded devices or, through their new Samsung Connect Home or other SmartThings hub device, other non-Samsung devices. While other companies do offer similar types of smart home hubs, none have the brand reach nor the installed base of branded devices that Samsung does.

As with any single-branded effort to dominate in the tech world, Samsung can’t possibly make a significant impact without reaching out proactively to other potential partners (and even competitors) on the device side in order to make its connected device platform viable. Still, because of its enormous footprint across so many aspects of households around the world, Samsung now possesses a bigger potential to become a disruptor in the platform war than its earlier OS-based efforts with Tizen might have suggested.

Podcast: Samsung S8, Dex, Bixby, Connect Home

In this week’s Tech.pinions podcast Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss the recent Samsung Unpacked 17 launch event, which included the debut of the Samsung S8 smartphone, their Dex desktop docking station, Bixby assistant and Samsung Connect Home device.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Augmented Reality Finally Delivers on 3D Promise

The disillusionment was practically palpable. 3D—particularly in TVs—was supposed to be the saving grace of a supposedly dying category, and drive a new level of engagement and involvement with media content. Alas, it was not to be, and 3D TVs and movies remain little more than a footnote in entertainment history.

Not surprisingly, many people gave up on 3D overall as a result of this market failure, viewing the technology as little more than a gimmick.

However, we’re on the verge of a new type of 3D: one that serves as the baseline for augmented reality experiences and that, I believe, will finally deliver on the promise of what many people felt 3D could potentially offer.

The key difference is that, instead of trying to force a 3D world onto a 2D viewing plane, this next generation of 3D enables the viewing of 3D objects in our naturally 3D world. Specifically, I’m referring to the combination of 3D cameras that can see and “understand” objects in the real world as being three-dimensional, along with 3D graphics that can be rendered and overlaid on this real-world view in a natural (and compelling) way. In other words, it’s a combination of 3D input and output, instead of just viewing an artificially rendered 3D output. While that difference may sound somewhat subtle in theory, in practice, it’s enormous. And it’s at the very heart of what augmented reality is all about.
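
As a rough illustration of why the 3D input matters, here’s a hedged sketch (Python, with made-up camera parameters and a toy depth map standing in for real sensor data) of one basic check an AR system can perform: project a virtual object’s position into the camera image, then compare its distance against the depth camera’s measurement at that pixel to decide whether the real world should hide it.

```python
# Illustrative depth-occlusion test for augmented reality rendering.
# The camera intrinsics and depth map below are made-up stand-ins for real sensor data.
import numpy as np

# Assumed pinhole-camera intrinsics (focal lengths and principal point, in pixels)
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def project(point_cam):
    """Project a 3D point (x, y, z in meters, camera space) to pixel coordinates."""
    x, y, z = point_cam
    u = int(round(FX * x / z + CX))
    v = int(round(FY * y / z + CY))
    return u, v, z

def is_occluded(point_cam, depth_map):
    """True if the real world (per the depth camera) sits in front of the virtual point."""
    u, v, z = project(point_cam)
    height, width = depth_map.shape
    if not (0 <= u < width and 0 <= v < height):
        return False  # outside the camera's view, so nothing measured can hide it
    return depth_map[v, u] < z  # measured surface is closer than the virtual point

# Toy example: a flat wall 2.0 m away, with a virtual object placed 3.0 m away
depth_map = np.full((480, 640), 2.0)
virtual_point = (0.1, 0.0, 3.0)
print(is_occluded(virtual_point, depth_map))  # True: the wall should hide the object
```

Without that measured depth, the renderer has no way to know the wall is even there, which is why simply overlaying graphics on a flat 2D view never feels anchored in the real scene.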

From the simple, but ongoing, popularity of Pokemon Go, through the first Google Tango-capable phone (Lenovo’s Phab 2 Pro), into notebooks equipped with 3D cameras, and ultimately leading to the hotly rumored next generation iPhone (whether that turns out to be iPhone 8 or 10 or something completely different remains to be seen), integrating 3D depth cameras with high-quality digital graphics into a seamless augmented reality experience is clearly the future of personal computing.[pullquote]Integrating 3D depth cameras with high-quality digital graphics into a seamless augmented reality experience is clearly the future of personal computing.[/pullquote]

The ability to have objects, data and, ultimately, intelligence injected into our literal view of the world is one of the most intellectually and physically compelling examples of how computing can improve our lives that has popped up in some time. And that’s exactly what this new version of augmented 3D reality can bring.

Of course, arguably, Microsoft HoloLens was the first commercially available product to deliver on this vision. To this day, for those who have been fortunate enough to experience it, the technology, capabilities and opportunities that HoloLens enables are truly awe-inspiring. If Magic Leap moves its rumored AR headset beyond vaporware/fake demoware into a real product, then perhaps it too will demonstrate the undeniably compelling technological vision that augmented reality represents.

The key point, however, is that the integration of 3D digital objects into our 3D world is an incredibly powerful combination that will bring computing overall, and smartphones in particular, to a new level of capability, usefulness, and, well, just plain coolness. It will also drive the creation of the first significant new product category that the tech world has seen in some time—augmented reality headsets.

To be fair, initial shipment numbers for these AR headsets will likely be very small, due to costs, bulky sizes and other limitations, but the degree of unique usefulness that they will offer will eventually make them a mainstream item.

The key technology that will enable this to happen is the depth camera. Intel was quick to recognize its importance and, several years back, built its RealSense line of cameras, initially designed to do facial recognition on notebooks. With Tango, Google brought these types of cameras to smartphones, and, as mentioned, Apple is rumored to be bringing them to the next generation iPhone in order to make its first stab at augmented reality.

The experience requires much more than just hardware, however, and that’s where the prospect of Apple doing some of their user interface (UI) software magic with depth cameras and AR could prove to be very interesting.

The concept of 3D has been an exciting one that the tech industry has arguably been chasing since the first 3D movies of the 1950s. However, only with the current and future iterations of this technology tightly woven into the enablement of augmented reality will the industry be able to deliver the kind of impact that many always hoped 3D could have.

Podcast: Apple iPads, Semiconductor Renaissance, Intel AI

In this week’s Tech.pinions podcast Tim Bajarin, Ben Bajarin and Bob O’Donnell discuss the new product announcements from Apple, new developments in the semiconductor market from companies like nVidia and ARM, and Intel’s recent announcement of a new AI organization.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Chip Magic

Sometimes, it just takes a challenge.

After years of predictable and arguably modest advances, we’re beginning to witness an explosion of exciting and important new developments in the sometimes obscure world of semiconductors—commonly known as chips.

Thanks to both a range of demanding new applications, such as Artificial Intelligence (AI), Natural Language Processing (NLP) and more, as well as a perceived threat to Moore’s Law (which has “guided” the semiconductor industry for over 50 years to a state of staggering capability and complexity), we’re starting to see an impressive range of new output from today’s silicon designers.

Entirely new chip designs, architectures and capabilities are coming from a wide array of key component players across the tech industry, including Intel, AMD, nVidia, Qualcomm, Micron and ARM, as well as internal efforts from companies like Apple, Samsung, Huawei, Google and Microsoft.

It’s a digital revival that many thought would never come. In fact, just a few years ago, there were many who were predicting the death, or at least serious weakening, of most major semiconductor players. Growth in many major hardware markets had started to slow, and there was a sense that improvements in semiconductor performance were reaching a point of diminishing returns, particularly in CPUs (central processing units), the most well-known type of chip.

The problem is, most people didn’t realize that hardware architectures were evolving and that many other components could take on tasks that were previously limited to CPUs. In addition, the overall system design of devices was being re-evaluated, with a particular focus on how to address bottlenecks between different components.[pullquote]People predicting the downfall of semiconductor makers didn’t realize that hardware architectures were evolving and that many other components could take on tasks that were previously limited to CPUs. [/pullquote]

Today, the result is an entirely fresh new perspective on how to design products and tackle challenging new applications through multi-part hybrid designs. These new designs leverage a variety of different types of semiconductor computing elements, including CPUs, GPUs (graphics processing units), FPGAs (field programmable gate arrays), DSPs (digital signal processors) and other specialized “accelerators” that are optimized to do specific tasks well. Not only are these new combinations proving to be powerful, we’re also starting to see important new improvements within the elements themselves.

For example, even in the traditional CPU world, AMD’s new Ryzen line underwent significant architectural design changes, resulting in large speed improvements over the company’s previous chips. In fact, they’re now back in direct performance competition with Intel—a position AMD has not been in for over a decade. AMD started with the enthusiast-focused R7 line of desktop chips, but just announced the sub-$300 R5, which will be available for use in mainstream desktop and all-in-one PCs starting in April.

nVidia has done a very impressive job of showing how much more than graphics its GPUs can do. From work on deep neural networks in data centers, through autonomous driving in cars, the unique ability of GPUs to perform enormous numbers of relatively simple calculations simultaneously is making them essential to a number of important new applications. One of nVidia’s latest developments is the Jetson TX2 board, which leverages one of their GPU cores, but is focused on doing data analysis and AI in embedded devices, such as robots, medical equipment, drones and more.

Not to be outdone, Intel, in conjunction with Micron, has developed an entirely new memory/storage technology called 3D XPoint that works like a combination of DRAM—the working memory in devices—and flash storage, such as SSDs. Intel’s commercialized version of the technology, which took over 10 years to develop, is called Optane and will appear first in storage devices for data centers. What’s unique about Optane is that it addresses a performance bottleneck found in almost all computing devices between memory and storage, and allows for performance advances for certain applications that will go way beyond what a faster CPU could do.

Qualcomm has proven to be very adept at combining multiple elements, including CPUs, GPUs, DSPs, modems and other elements into sophisticated SOCs (system on chip), such as the new Snapdragon 835 chip. While most of its work has been focused on smartphones to date, the capabilities of its multi-element designs make them well-suited for many other devices—including autonomous cars—as well as some of the most demanding new applications, such as AI.

The in-house efforts of Apple, Samsung, Huawei—and to some degree Microsoft and Google—are also focused on these SOC designs. Each hopes to leverage the unique characteristics built into its chips to create distinct features and functions that can be incorporated into future devices.

Finally, the company that’s enabling many of these capabilities is ARM, the UK-based chip design house whose chip architectures (sold in the form of intellectual property, or IP) are at the heart of many (though not all) of the previously listed companies’ offerings. In fact, ARM just announced that over 100 billion chips based on their designs have shipped since the company started 21 years ago, with half of those coming in the last 4 years. The company’s latest advance is a new architecture they call DynamIQ that, for the first time, allows the combination of multiple different types and sizes of computing elements, or cores, inside one of their Cortex-A architecture chip designs. The real-world results include up to a 50x boost in AI performance and a wide range of multifunction chip designs that can be uniquely architected and suited for unique applications—in other words, the right kind of chips for the right kind of devices.

The net result of all these developments is an extremely vibrant semiconductor market with a much brighter future than was commonly expected just a few years ago. Even better, this new range of chips portends an intriguing new array of devices and services that can take advantage of these key advancements in what will be exciting and unexpected ways. It’s bound to be magical.

Computing on the Edge

It’s easy to fall into the trap these days of thinking that anything really important in tech only happens in the cloud. After all, that’s where all the excitement, investment and discussion seem to be. To be sure, there are indeed innumerable efforts to not only build software for the cloud, but also to use the cloud to completely reinvent companies or even industries.

As important as these cloud-based developments may be, however, they shouldn’t supersede many of the equally exciting capabilities being brought to life on the edge of today’s networks. While these endpoints, or edge devices, used to be limited to smartphones, PCs and tablets, there’s now an explosion of new options for creating, manipulating, viewing, analyzing and storing data. From VR headsets to smart digital assistants to intelligent tractors, the range of edge devices is enormous and shows no signs of slowing down anytime soon.

In addition, we’re starting to see the appearance of entirely new types of distributed computing architectures that can break up large workloads across different elements. Admittedly, some of this can get pretty messy fast, but suffice to say that many types of modern applications, such as voice-based computing, big (and little!) data analytics, factory automation, and real-time document collaboration tools all require the efforts and coordination of several different layers of computing, including pieces that live out on the edge.

On the industrial side of this work, there’s a relatively new industry group called the OpenFog Consortium—originally organized by companies like Cisco, ARM, Dell and Microsoft—that’s working to try and standardize some of these elements and how they can be used in these types of modern applications. The group gets its somewhat confusing name from the concept of applying cloud-like computing principles close to the ground (i.e., near the edge or endpoint)—similar to how clouds near the ground are perceived as fog in the real world.

In many fog computing applications, sensor data, whether from an endpoint device or from sensors attached directly to a simple server-like computer (sometimes called a “gateway”), is acted upon by that gateway to trigger certain actions or perform certain types of tasks. After that, the data is also forwarded on up the chain to more powerful servers that typically do live in the cloud for advanced data analysis.
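
A minimal sketch of that gateway pattern might look something like the following (Python; the sensor source, the local action, the batch size and the cloud endpoint are all hypothetical placeholders): act locally on each reading, and forward only a periodic summary upstream.

```python
# Illustrative fog-computing gateway: act locally, send only summaries to the cloud.
# read_sensor(), trigger_fan(), the batch size and the cloud URL are hypothetical.
import json
import random
import time
import urllib.request

TEMP_LIMIT_C = 80.0
CLOUD_URL = "https://example.com/ingest"   # assumed cloud analytics endpoint
BATCH_SIZE = 60                            # forward one summary per 60 readings

def read_sensor():
    """Stand-in for reading an attached temperature sensor."""
    return 70.0 + random.uniform(-5.0, 15.0)

def trigger_fan():
    """Stand-in for a low-latency action the gateway takes on its own."""
    print("Local action: cooling fan switched on")

def send_summary(readings):
    """Forward an aggregate (not the raw stream) up the chain for deeper analysis."""
    summary = {"count": len(readings),
               "avg": sum(readings) / len(readings),
               "max": max(readings)}
    request = urllib.request.Request(CLOUD_URL,
                                     data=json.dumps(summary).encode(),
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)

if __name__ == "__main__":
    batch = []
    while True:
        temperature = read_sensor()
        if temperature > TEMP_LIMIT_C:
            trigger_fan()            # immediate response happens at the edge
        batch.append(temperature)
        if len(batch) >= BATCH_SIZE:
            send_summary(batch)      # only the summary travels to the cloud
            batch = []
        time.sleep(1)
```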

Probably the best example of an advanced edge computing element is a connected autonomous (or even semi-autonomous) car. Thanks to a combination of enormous amounts of sensor data, critical local processing power, and an equally essential need to connect back to more advanced data analysis tools in the cloud, autonomous cars are seen as the poster child of advanced edge computing. Throw in the wide range of different types of computing elements required for assisted or autonomous driving, and it’s easy to see why so many companies are making major acquisitions in this space. Intel’s announcement yesterday of plans to purchase Mobileye, for example, is just the latest in a string of key developments in this market, and it’s not likely to be the last. Mobileye’s components will bring computer vision and other critical elements of connected car-based computing to Intel’s rapidly growing grab bag of complementary technologies.[pullquote]Major tech companies all see connected cars as essentially ‘the’ computing device of the next decade or so, just as smartphones have been for the last decade.[/pullquote]

Companies like Intel, nVidia, Qualcomm and ARM on the semiconductor side, as well as system integrators like Harman (recently purchased by Samsung) all see connected cars as essentially “the” computing device of the next decade or so, just as smartphones have been for the last decade. That’s another reason why there’s so much excitement—and so many battles looming—in and around car tech. Add in the carriers, network providers, car OEMs, other Tier one suppliers, and a raft of startups and the stage is set for an intricate and complex competitive dance for years to come.

While it’s tempting to long for the simpler days of computing devices, where everything occurred locally, or even for a pure cloud-based world, where everything happens in remote data centers, the simple truth is that today’s advanced applications require much more sophisticated hybrid designs. Building out a cloud-based infrastructure and cloud-based software tools was a critical step along this computing evolution chain, but it’s clear that the most interesting and exciting developments moving forward are going to be pushing advanced computing out onto the edge.

Cars Need Digital Safety Standards Too

When it comes to using your digital devices, physical safety is probably one of the few things you don’t have to worry about. Sure, the occasional overheating battery can be a problem, but generally speaking, you don’t see a whole lot of need, nor requests, for detailed safety requirements for digital gadgets.

In the automotive world, on the other hand, there is an enormous range of different safety standards and requirements that must be met before a particular vehicle can even be sold, let alone used. The Federal Motor Vehicle Safety Standards (FMVSS), which are developed and enforced by the National Highway Traffic Safety Administration (NHTSA), for example, include hundreds of detailed requirements that automakers must meet in order for a car to be eligible for sale in the US.

Importantly, these rules are intended to help maintain the safety of passengers inside the vehicle, as well as pedestrians and other people near the vehicle (such as passengers in other cars).

As cars continue to evolve, they are morphing into the most sophisticated digital devices we own (or at least use), yet unlike most electronic devices, they still represent an enormous potential safety hazard to both people and property. So, do we need to start outlining safety and security standards for the specific digital components of modern vehicles? Given the car hacking incidents that have already occurred, and the concerns about the potential for even worse ones, it seems the obvious answer is yes.[pullquote]Given the car hacking incidents that have already occurred, and the concerns about the potential for even worse ones, it seems the obvious answer to a question about the need for automotive digital safety standards is yes.[/pullquote]

To its credit, much of the automotive industry does follow a functional safety standard for vehicle electronics called ISO 26262. Developed by the International Organization for Standardization (referred to as ISO), the standard incorporates a number of guidelines for how different electronic subsystems (both hardware and software) should work on their own, and along with other subsystems in the vehicle. In addition, ISO 26262 outlines four Automotive Safety Integrity Levels (ASIL) that rank these systems on their potential risk level, from the lowest at ASIL Level A to the highest at ASIL Level D.

As robust a mechanism as these standards may appear to represent, however, they don’t necessarily take digital security issues into account. For example, there’s no standard way to ensure the integrity of “over-the-air” upgrades to the incredibly complex software that is now found in today’s cars. While very few carmakers are currently doing software upgrades to their vehicles (unfortunately), that will undoubtedly change soon. In addition, as we start to see more advanced data and services being delivered both to and from the car thanks to technologies like 5G networks, there will be a critical need for ensuring the integrity of those communications.
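
To make the over-the-air integrity point a bit more concrete, here’s a hedged sketch of the general idea (Python, using the third-party cryptography package; the file names, raw key format and overall flow are illustrative assumptions, not any automaker’s actual process): before installing an update package, the in-vehicle software verifies that it was signed by a key the automaker controls.

```python
# Illustrative check of a signed over-the-air update package before installation.
# The file names, raw Ed25519 key format and overall flow are assumptions for the
# sake of the example, not any automaker's actual process.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def update_is_authentic(package_path, signature_path, pubkey_path):
    """Return True only if the package was signed by the holder of the private key."""
    with open(pubkey_path, "rb") as f:
        public_key = Ed25519PublicKey.from_public_bytes(f.read())
    with open(package_path, "rb") as f:
        package = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, package)   # raises if tampered with or mis-signed
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    if update_is_authentic("ivi_update.bin", "ivi_update.sig", "automaker_pub.key"):
        print("Signature valid: safe to hand off to the installer")
    else:
        print("Signature check failed: reject the update")
```

The point isn’t this particular scheme; it’s that, absent a standard, every automaker is left to decide on its own whether and how to perform checks like this.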

Many advanced assisted and autonomous driving features also require the coordination of multiple different subsystems within a vehicle, but there aren’t sufficient standards to ensure that those in-car communications aren’t compromised in any way either.

Admittedly, like trying to develop a security standard for IoT devices overall, creating digital security requirements for cars is no easy task. One major challenge, for example, will be to determine exactly which parts of an automotive digital security solution would need to be required, and which parts may simply be recommended (and, therefore, open to a variety of interpretations by different car or component makers). The risk from hacked cars is so high, though, that it’s essential the work be done. In fact, I wouldn’t be terribly surprised to see federal or state legislation that starts to demand certain automotive security requirements be met before more advanced cars can be sold.

The physical safety standards for cars have been around for 50 years and are a widely accepted and essential part of the automotive industry. What needs to happen now is a similar level of effort and acceptance on standardizing the safety and security-related digital components at the heart of today’s modern vehicles.

The Messy Path to 5G

It’s the hottest topic in the tech world—certainly at this week’s Mobile World Congress (MWC) trade show in Barcelona, Spain. The next generation of wireless networks—commonly called 5G—is on the lips of most people you speak to, in the majority of press announcements, and mentioned in just about every meeting in which I’ve participated.

5G is critically important for several reasons. Not only will it offer major speed increases (up to multiple gigabits/second versus current averages of around 15-20 Mbits/second), it will also reduce the delays, or latency, in network traffic, and it will dramatically increase the density and reliability of wireless networks.
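
For a rough sense of what that speed jump means in practice, here’s a back-of-the-envelope calculation (Python; the file size and link speeds are illustrative round numbers, not measured figures):

```python
# Back-of-the-envelope download times at 4G-era vs. gigabit-class speeds.
# The file size and link speeds are illustrative round numbers.
FILE_GB = 2.0  # e.g., a feature-length HD movie

def seconds_to_download(file_gb, megabits_per_second):
    megabits = file_gb * 8 * 1000  # gigabytes -> gigabits -> megabits
    return megabits / megabits_per_second

for label, speed in [("~20 Mbits/s (typical LTE average)", 20),
                     ("~1,000 Mbits/s (gigabit-class)", 1000)]:
    print(f"{label}: about {seconds_to_download(FILE_GB, speed):,.0f} seconds")
# Roughly 800 seconds (about 13 minutes) versus 16 seconds for the same file.
```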

The problem is, the messages being shared regarding this important new technology are quite different; in some cases, diametrically so. They differ not only on the basic concepts, but also on the timelines, the capabilities, and the overall relevance. The result is a lot of confusion with regard to what exactly is coming, when it will arrive, and what it all means.

While it’s easy to blame the problem on the hype surrounding 5G—and there’s unquestionably a lot of that—there are important and reasonably valid reasons for some of this confusion. In each case, the situation basically boils down to several different perspectives from different industry participants being mixed together into a jumbled mess.

First, it’s important to remember that there are two key paths towards 5G: the network path—being touted by key network equipment providers like Ericsson, Huawei and Nokia, as well as many major carriers—and the component and device path—driven by Qualcomm, Intel and others. As with any broadband technology, there’s a key chicken vs. egg discussion when it comes to 5G. You have to have the network support in order to enable devices, but you need enough devices (or at least the promise of them) to justify the enormous investment in core networking equipment required.

Not surprisingly, the big network companies are pushing 5G hardest, but carriers (particularly Verizon) are also starting to make a great deal of noise about the “imminent” arrival of 5G trials. Of course, trials are not the same thing as widespread deployment that we can all use, but some believe that even doing trials is a bit risky at this point because the final 5G technical standards have not been ratified. The key technologies expected to be part of the final 5G standard, including things such as millimeter wave radios, are extremely difficult to implement, particularly on a widespread basis, so there’s still lots of testing and other work to be done.

Another key difference of opinion on 5G is more philosophical: the idealists versus the realists. Driven in part by some of these engineering challenges, as well as basic business model issues, it’s easy to find yourself on different sides of the “5G is nearly here” debate, depending on whom you speak with. The more optimistic side is talking about wider availability in 2018 or 2019, while those who consider themselves more realistic are saying it will be 2020 before 5G is something that we’ll actually be able to use. Some of these dates are being driven by key events: both the 2018 Winter Olympics in South Korea and the 2020 Summer Olympics in Japan are expected to be critical milestones for wider testing/adoption of 5G.

The third and final reason for differences on 5G timing and relevance stems from the reality that incremental improvements in network performance and capabilities occur at numerous points in between the roughly once-a-decade jumps in telecom network generations. As T-Mobile CTO Neville Ray mentioned in a conversation we had here, it’s best to think of a concept like ten steps along the way from 4G to 5G. For both technical and practical reasons, there isn’t a single major jump that occurs when a transition from 4G to 5G happens. (This was also the case between 3G and 4G.) Instead, a number of key capabilities that are typically perceived as part of the next generation are added to the current generation as one of these “steps.” For example, while gigabit/second level speeds are commonly touted as a key characteristic of 5G, at this week’s MWC, there were a number of announcements around Gigabit LTE—that is, gigabit/second speeds on 4G.[pullquote]While many may cringe at the thought, we could start to see companies talking about 4.5G or super 4G or early 5G or some other variation.[/pullquote]

Qualcomm’s new X16, X20 and X50 family modems, as well as Intel’s XMM7560 modem, for example, offer the promise of Gigabit speeds much sooner, and Sony’s new Xperia XZ Premium—powered by the new Snapdragon 835 from Qualcomm, which incorporates the X16 modem—is the first smartphone with a Gigabit LTE-capable modem.

As a result of this (and all the previous perspective differences), it wouldn’t be terribly surprising if we start to see some companies talking about new kinds of “intermediate” technology or even network names. While many may cringe at the thought, we could start to see companies talking about 4.5G or super 4G or early 5G or some other variation. While I can understand the desire to differentiate in a crowded market, these short-term marketing efforts will undoubtedly just complicate the issue, and confuse consumers even more than this transition is likely to do anyway.

It’s clear that none of the various participants in the 5G value chain are trying to create confusion. They genuinely believe in their view of how and when their pieces will arrive, and what they can do. However, without recognizing the many varied perspectives that they each bring to the table, it’s very difficult to understand how all these pieces will fit together. As a result, I have a feeling the path to 5G is going to be a lot messier than many would like.

Rethinking Wearable Computing

Bring up the topic of wearables these days, and you’re likely to see rolled eyes, shrugged shoulders, and a general sense of “whatever.” The problem, of course, is that wearables were badly overhyped and haven’t even come close to living up to the expectations that many companies, analysts, and industry observers had for the category.

Sales in many of the most closely watched sub-categories, notably smartwatches, have not been anywhere near the level that would make them “the next big thing.” Sure, you could argue a few companies have done OK, but the short attention span of the tech industry has clearly been diverted to newer, sexier devices, like voice-controlled speakers, or AR and VR headsets.

Despite these issues, it may be that we’ve given up on wearables a bit too soon. The problem is that we’re thinking much too narrowly about what the concept, and implementation, of wearable computing really is. To be clear, I don’t see a big future for the individual products that we currently count as wearables, but I think the idea of several linked components that work together as a wearable computing system could have legs.

Imagine, for example, a combination of something you wear on your wrist, something you wear on your face, perhaps a foldable screen you carry in your pocket, along with a set of intelligent earbuds (which might be integrated into the glasses you wear on your face), all of which work together seamlessly.

The devices would each incorporate sensors and/or cameras that would enable real-world contextual information. They would all incorporate high-speed wireless connections, and the entire system would be reliably voice-controlled with an AI-powered digital assistant. Critically, I think a solution like this would need to be sold together as a system—though a componentized system might work as well.

Admittedly, there are some inherent challenges in a concept like this. It’s hard enough for people to always remember to carry their smartphones, so thinking that they’ll regularly walk around with 3 or 4 devices seems like a stretch. Remember, however, that certain elements of these solutions could eventually get integrated into other currently non-technical components of our lives, such as our clothing. Start thinking that way, and some of the concepts may not be quite so far-fetched.[pullquote]Arguably, what I’m really talking about is the next evolution beyond smartphones into a highly personalized, but much less visible form of personal computing.[/pullquote]

Arguably, what I’m really talking about is the next evolution beyond smartphones into a highly personalized, but much less visible form of personal computing. Given that I don’t think people are too eager to give up their smartphones yet, this connected wearable computing vision is still clearly a ways off—maybe even as much as 8-10 years. Nevertheless, if we start to formulate a goal for where computing is headed, we can more easily envision that path from our present to the future. More importantly, we can start thinking more clearly about potential stops—or product concepts and iterations—along the way.

At some point, wearable computing devices or solutions or whatever form they end up taking will be part of our lives. Of that, I am convinced. But in order to start moving towards that future vision, we need to get past the broken, highly separated wearable product categories of today and start thinking about a more integrated wearable solution for tomorrow.

Podcast: Samsung Arrest, iPhone AR, Apple Content, Facebook Manifesto

In this week’s Tech.pinions podcast Tim Bajarin and Bob O’Donnell discuss the recent arrest of the Samsung heir apparent, chat about iPhone rumors and Apple’s potential plans for Augmented Reality, debate Apple’s efforts in creating original content, and discuss the implications of Mark Zuckerberg’s massive tome on social media in the modern era.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Modern Workplaces Still More Vision Than Reality

We’ve all seen the images. Happy young employees, working productively in open-air workspaces, easily collaborating with co-workers and outside colleagues all over the world, utilizing persistent chat tools like Slack to keep on top of all their latest projects and other efforts. It sounds great, and in a few places in Silicon Valley, things do work that way—at least in theory.

But at most companies in the US (and likely the rest of the world), well, not so much. It’s not that companies aren’t looking at or starting to use some of these new communication and collaboration technologies. Some are, but the deployment levels are low at less than 30% overall; plus, employee habits haven’t really changed in many places.

Such are the results from the latest study on workplace trends completed by TECHnalysis Research. The study is based on a survey of 1,001 US-based working adults aged 18-74 at medium (100-999 employees) and large (1,000+ employees) companies across a range of industries. The survey goal was to understand how the modern workplace is evolving in terms of how and where people work, as well as the hardware, software, services and capabilities that employees expect from their employers.

I wrote about some of the surprising results regarding work habits and locations in a previous column called “The Workplace of the Future,” but for this column I’m going to focus on some of the big picture implications of the research, as well as some technology-specific trends.

The key takeaway is that both technologies and habits rooted in the 20th century are keeping the 21st century vision of the modern workplace from becoming reality. For example, despite the appearance of modern communications and collaboration tools, it’s the “old school” methods of emails, phone calls and texts that make up 75% of all communications with co-workers. There are certainly some differences based on the age of the employee, but even for workers under 45, the number is 71% (emails and voice calls make up 58% for that age group).

From a device perspective, the most common tool by far is not a smartphone, but a company-owned desktop PC, which is used for just under half (48%) of all device-related work. (For the record, personally owned smartphones are only used for 7.5% of total work on average.) Partially as a result, some version of Windows is used for roughly two-thirds (65%) of all work, with Android at 11%, iOS at 10%, and the rest split among cloud-based platforms, Macs, Linux and other alternative options. Arguably, that is a drop from the days when Windows owned 90%+, but it still shows how dominant Microsoft is in the workplace.

Open air environments have received a great deal of attention and focus in modern workplaces, but there’s a potential gremlin in that future work vision: noise. In fact, in about 25% of outside the office alternative or shared workspaces (such as WeWork) and in 20% of inside the office alternative or shared workspaces, noise was cited as having a serious impact on productivity. Given these numbers, it’s not terribly surprising to see reports suggesting that some of these experiments in workplace flexibility are not working out as well as hoped.

From a conference room perspective, basic audioconferencing, guest WiFi, and wireless access to projectors (or other displays) are the most widely available services, but when asked which of these capabilities offers the greatest quality and utility, the story was very different. Modern tools such as HD videoconferencing, large interactive screens (a la Microsoft’s Surface Hub), electronic whiteboards, and dedicated computing devices designed to ease meeting collaboration (such as HP’s new Elite Slice, based on Intel’s Unite platform) scored the highest satisfaction levels, despite their currently low levels of usage. In other words, companies that invest in modern collaboration tools are likely to find higher usage and appreciation for those devices.[pullquote]Companies that invest in modern collaboration tools are likely to find higher usage and appreciation for those devices.[/pullquote]

From a software perspective, it seems that old habits die hard. Emailing documents back and forth is still the most common method of collaboration with co-workers at 35%, while the usage of cloud-based storage services is only 8% with co-workers and 7% with colleagues from other organizations. Similarly, real-time document collaboration tools, such as Microsoft’s Office 365 and Google Docs, which have now been available for several years, are only used with co-workers for collaboration purposes by 19% of respondents.

Modern forms of security, such as biometrics, are another key part of the ideal future workplace vision. In current-day reality, though, biometric security methods are only used 15% of the time for corporate data, 14% for physical facilities, and 12% for access to either corporate-owned or personally owned devices. Surprisingly, 41% of respondents said their company does not have any security policy for personally owned devices—yet those personal devices are used to complete 25% of the device-based work that they do. No wonder security issues at many organizations are a serious concern.

The tools and technologies are already available to deliver on a highly optimized, highly productive workplace of the future, but, as the survey results show, there’s still a long way to go before that vision becomes reality.

(If you’d like to dig a bit deeper, a free copy of a few survey highlights is available to download in PDF format here.)

Podcast: Android Wear 2.0 Smartwatches, Android-Enabled Chromebooks, Oculus-Best Buy

In this week’s Tech.pinions podcast Ben Bajarin and Bob O’Donnell discuss the release of Android Wear 2.0-based smartwatches and the state of the overall wearable industry, analyze the potential impact of forthcoming Chromebooks from Samsung and others that directly support Android apps, and debate what the closing of hundreds of Oculus VR demo stations at Best Buy stores means for the VR market.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

The Missing Map from Silicon Valley to Main Street

Regardless of where you sit on the political spectrum, the maelstrom created by the last US presidential election uncovered a painful reality for the tech industry: a striking gap between it and much of mainstream America.

It’s not that Americans of all socioeconomic levels aren’t using the output of the tech industry. From smartphones to social media and PCs to online shopping, US citizens are, of course, voracious consumers of all things tech.

The problem is a serious lack of empathy and understanding among people who work within the tech industry for those outside their rarefied milieu. To its credit, the tech industry has created enormous amounts of wealth and many high-paying jobs. Very few of those jobs, however, are relevant or available to a large swath of the US population. While I haven’t seen any official breakdowns, I’m not aware of many middle-income jobs (according to US Census statistics, the average US family income in 2015 was $55,755) in the tech industry. Heck, interns at big tech companies often get paid more than that.

Not surprisingly, that kind of income disparity is bound to create some resentment. Yes, on the one hand, the significantly higher salaries often found in tech jobs do make the goal of working in tech an attractive one for many who aspire to break into the field. But not everyone can (or wants to) work in tech.

A functioning society, of course, requires people to work across a range of jobs and at a range of income levels. But, it does seem rather disconcerting that an industry that is responsible for driving so much growth across the economy, and that houses the most well-known and well-respected brands in the world, does so little to employ people at mainstream income levels. For all of its focus on social justice and other progressive concerns, the tech industry displays a rather shocking lack of interest in economic inclusivity, which is arguably at the very heart of a just society.[pullquote]For all of its focus on social justice and other progressive concerns, the tech industry displays a rather shocking lack of interest in economic inclusivity, which is arguably at the very heart of a just society.[/pullquote]

Of course, fixing the problem isn’t easy. But it does seem like there are a few basic ideas that could help and a lot more “thinking different” that might be worth a try. For one thing, the fact that the tech industry notoriously outsources (or subcontracts) nearly every lower and middle-income job to another firm (all in the name of cost-cutting) needs to be re-examined. From bus drivers to janitorial and security staff to, yes, manufacturing jobs, it’s high time to start making the people who do work for a company employees of that company, with all the rights and benefits that entails. Yes, it could negatively impact the bottom line (though, in the big scheme of things, not by very much), but it would be a tremendously positive step for many. All it takes is some fiscal stamina and a bit of guts.

In addition, the whole mindset of gig-based companies (such as Uber) needs to be reconsidered. Maybe the original intentions for generating a bit of extra income were good, but when millions of people start trying to build their lives around pay-for-hire work, it’s time to start making them the middle-income employees they’ve earned the right to be.

It’s also time to start thinking about packaging and selling technology-driven products in entirely new ways. There might be ways to build entire new sub-economies around, for example, helping farmers grow their crops more efficiently through the use of sensors and other IoT-based technologies. Companies could also build products or services that enable the creation of small businesses, such as a tech franchise that helps other local small businesses with their tech devices and software: someone who could help local bakers, restaurants, florists, or shoe repair shops run their businesses a bit more efficiently while providing “door-to-door” service.

Part of the problem is that the tech industry has become so obsessed with offering only the latest, most feature-rich products and services, delivered through high-income jobs, that it has lost sight of the fact that some people only need very simple “older” tech that could be delivered in a more modest manner through comparatively lower-paying jobs.

Rather than planning for a societal collapse, it’s time to start mapping out a more positive, productive future that links Silicon Valley to Main Street in a useful, meaningful way.

The Network vs. The Computer

The history of the technology industry has seen several swings back and forth between dependence on a network that can deliver the output of centralized computing resources and client devices that do most of the computing work on their own.

As we start to head towards the Gigabit LTE and then 5G era, when increasingly fast wide-area wireless networks make access to massive cloud-based computing resources significantly easier, there’s a fundamental question that must be asked. Do we still need powerful client devices?

Having just witnessed a demo of Gigabit LTE put on by Australian carrier Telstra, along with network equipment provider Ericsson, mobile router maker Netgear, and modem maker Qualcomm, I find the question increasingly relevant. Thanks to advancements in network and modem (specifically Category 16 LTE) technologies, the group demonstrated broadband download speeds of over 900 Mb/s (conveniently rounded up to 1 Gb/s) that Telstra will officially unveil in two weeks. Best of all, Gigabit LTE is expected to come to over 15 carriers around the world (including several in the US) before the end of 2017.[pullquote]Gigabit LTE is expected to come to over 15 carriers around the world (including several in the US) before the end of 2017.[/pullquote]

Looking forward, the promise of 5G is not only these faster download speeds, but also nearly instantaneous (1 millisecond) response times. This latter point, referred to as ultra low latency, is critical for understanding the real potential impact of future network technology developments like 5G. Even today, the lack of completely consistent, reliable network speeds is a key reason why we continue to need (and use) an array of devices with a great deal of local computing power.

Sure, today’s 4G and WiFi networks can be very fast and work well for many applications, but there isn’t the kind of time-sensitive prioritization of the data on the networks to allow them to be completely relied on for mission critical applications. Plus, overloaded networks and other fairly common challenges to connectivity lead to the kinds of buffering, stuttering and other problems with which we are all quite familiar. If 5G can live up to its promise, however, very fast and very consistent network performance with little to no latency will allow it to be reliably used for applications like autonomous driving, where milliseconds could mean lives.

In fact, the speed and consistency of 5G could essentially turn cloud-based datacenters into the equivalent of directly-attached computing peripherals to our devices. Some of the throughput numbers from Gigabit LTE are now starting to match that of accessing local storage over an internal device connection, believe it or not. In other words, with these kinds of connection speeds, it’s essentially possible to make the cloud local.[pullquote]The speed and consistency of 5G could essentially turn cloud-based datacenters into the equivalent of directly-attached computing peripherals to our devices. [/pullquote]
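To put that comparison in rough numbers, here’s a quick back-of-the-envelope sketch in Python; the drive throughput figures are illustrative assumptions on my part, not measurements from the Telstra demo.

```python
# Back-of-the-envelope comparison: time to move a 1 GB file over Gigabit LTE
# versus a typical local drive. Drive figures below are assumed, not measured.

FILE_SIZE_MBYTES = 1000  # 1 GB, working in megabytes for simplicity

links = {
    "Gigabit LTE (demoed ~900 Mb/s)": 900 / 8,  # convert megabits/s to MB/s
    "Typical spinning hard drive":    120,      # assumed ~120 MB/s sequential
    "Mainstream SATA SSD":            500,      # assumed ~500 MB/s sequential
}

for name, mbytes_per_sec in links.items():
    seconds = FILE_SIZE_MBYTES / mbytes_per_sec
    print(f"{name:32s} ~{seconds:5.1f} s to transfer 1 GB")
```

Even on these rough assumptions, a Gigabit LTE link lands in the same range as a local spinning drive, which is the sense in which the cloud starts to feel “local,” though latency and consistency matter at least as much as raw throughput.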

Given that the amount of computing power in these cloud-based datacenters will always dwarf what’s available in any given device, the question again arises, what happens to client devices? Can they be dramatically simplified into what’s called a “thin client” that does little more than display the results of what the cloud-based datacenters generate?

As logical as that may at first sound, history has shown that it’s never quite that simple. Certainly, in some environments and for some applications, that model has a great deal of promise. Just as we continue to see some companies use thin clients in place of PCs for things like call centers, remote workers and other similar environments, so too will we see certain applications where the need for local computing horsepower is very low.

In fact, smart speakers like the Amazon Echo and Google Home are modern-day thin clients that do very little computing locally and depend almost completely on a speedy network connection to a cloud-based datacenter to do their work.

When you start to dig a bit deeper into how these devices work, however, you start to realize why the notion of powerful computing clients will not only continue to exist, but likely even expand in the era of Gigabit LTE, 5G and even faster WiFi networks. In the case of something like an Echo, there are several tasks that must be done locally before any requests are sent to the cloud. First, you have to signify that you want it to listen, and then the audio needs to go through a pre-processing “cleanup” that helps ensure a more accurate response to what you’ve said.
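To make that division of labor concrete, here is a minimal, hypothetical sketch in Python; none of these function names correspond to Amazon’s or Google’s actual APIs, they simply stand in for the local steps described above.

```python
# Hypothetical sketch of a smart speaker's local front end. None of these
# functions correspond to a real vendor API; they stand in for the work that
# has to happen on the device before any audio reaches the cloud.

def detect_wake_word(audio_frame: bytes) -> bool:
    # Step 1 (local): a small, always-on keyword spotter decides whether the
    # user actually addressed the device. Here we fake it with a marker prefix.
    return audio_frame.startswith(b"WAKE")

def clean_up_audio(raw_audio: bytes) -> bytes:
    # Step 2 (local): echo cancellation, beamforming, and noise reduction would
    # normally happen here; this placeholder just passes the audio through.
    return raw_audio

def send_to_cloud(processed_audio: bytes) -> str:
    # Step 3 (remote): only now does anything leave the device. A real
    # implementation would stream the audio to the assistant's speech service.
    return f"cloud response for {len(processed_audio)} bytes of audio"

def handle_frame(audio_frame: bytes, buffered_utterance: bytes) -> None:
    if detect_wake_word(audio_frame):
        cleaned = clean_up_audio(buffered_utterance)
        print(send_to_cloud(cleaned))

handle_frame(b"WAKE...", b"what's the weather like tomorrow?")
```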

Over time, those local steps are likely to increase, placing more demands on the local device. For example, having the ability to recognize who is speaking (speaker dependence) is a critical capability that will likely occur on the device. In addition, the ability to perform certain tasks without needing to access a network (such as locally controlling devices within your home), will drive demand for more local computing capability, particularly for AI-type applications like the natural language processing used by these devices.

AI-based computing requirements across several different applications, in fact, are likely going to drive computing demands on client devices for some time to come. From autonomous or assisted driving features on cars, to digital personal assistants on smartphones and PCs, the future will be filled with AI-based features across all our devices. Right now, most of the attention around AI has been in the datacenter because of the enormous computing requirements that it entails. Eventually, though, the ability to run more AI-based algorithms locally, a process often called inferencing, will be essential. Even more demanding tasks to build those algorithms, often called deep learning or machine learning, will continue to run in the data center. The results of those efforts will lead to the creation of more advanced inferencing algorithms, which can then be sent down to the local device in a virtuous cycle of AI development.
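As a toy illustration of that training-versus-inferencing split (and emphatically not any vendor’s actual pipeline), the sketch below “trains” a trivial linear model as a stand-in for the expensive datacenter work, then runs the cheap evaluation step that would live on the device.

```python
# Toy illustration of the training/inferencing cycle: "datacenter" work builds
# the model, "device" work only evaluates it. Real deep learning involves far
# more data and compute, but the division of labor is the same.

def train_in_datacenter(samples):
    # Expensive step: fit a simple linear model y = w*x + b to the data.
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
        sum((x - mean_x) ** 2 for x, _ in samples)
    b = mean_y - w * mean_x
    return {"w": w, "b": b}  # the "inferencing algorithm" shipped down to devices

def infer_on_device(model, x):
    # Cheap step: evaluating the shipped model is just a multiply and an add,
    # which is why it can run locally without a network connection.
    return model["w"] * x + model["b"]

model = train_in_datacenter([(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)])
print(infer_on_device(model, 5))  # local prediction, no cloud round trip
```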

Admittedly, it can get a bit complicated to think through all of this, but the bottom line is that a future driven by a combination of fast networks and powerful computing devices working together offers the potential for some amazing applications. Early tech pioneer John Gage of Sun Microsystems famously argued that the network is the computer, but it increasingly looks like the computer is really the network and the sum of its connected powerful parts.

Podcast: Tech Earnings for Alphabet, Microsoft and Intel

In this week’s Tech.pinions podcast Ben Bajarin and Bob O’Donnell discuss earnings reports from Alphabet (Google), Microsoft, and Intel and what they say about the future of the tech industry.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Voice Drives New Software Paradigm

A great deal has been written recently on the growing importance of voice-driven computing devices, such as Amazon’s Echo, Google Home and others like it. At the same time, there’s been a long-held belief by many in the industry that software innovations will be the key drivers in moving the tech industry forward (“software is eating the world,” as VC Marc Andreessen famously touted over 5 years ago).

The combination of these two—software for voice-based computing—would, therefore, seem to be at the very apex of tech industry developments. Indeed, there are many companies now doing cutting-edge work to create new types of software for these very different kinds of computing devices.

The problem is, expectations for this kind of software seem to be quickly surpassing reality. Just this week, in fact, there were several intriguing stories related to a new study which found that usage and retention rates were very low for add-on Alexa “skills” and similar voice-based apps for the Google Assistant platform running inside Google Home.

Essentially, the takeaway from the study was that outside of the core functionality of what was included in the device, very few new add-on apps showed much potential. The implication, of course, is that maybe voice-based computing isn’t such a great opportunity after all.

While it’s easy to see how people could come to that conclusion, I believe it’s based on an incorrect way of looking at the results and thinking about the potential for these devices. The real problem is that people are trying to apply the business model and perspective of writing apps for mobile phones to these new kinds of devices. In this new world of voice-driven computing, that model will not work.

Of course, it’s common for people to apply old rules to new situations; that’s the easy way to do it. Remember, there was a time in the early days of smartphones when people didn’t really grasp the idea of mobile apps, because they were used to the large, monolithic applications that were found on PCs. Running big applications on tiny screens with what were, at the time, very underpowered mobile CPUs didn’t make much sense.

In a conceptually similar way, we need to realize that smart speakers and other voice-driven computing devices are not just smartphones without a screen—they are very different animals with very different types of software requirements. Not all of these requirements are entirely clear yet—that’s the fun of trying to figure out what a new type of computing paradigm brings with it—but it shouldn’t be surprising to anyone that people aren’t going to proactively seek out software add-ons that don’t offer incredibly obvious value.

Plus, without the benefit of a screen, people can’t remember too wide a range of keywords to “trigger” these applications. Common sense suggests that the total possible number of “skills” that can be added to a device is going to be extremely limited. Finally, and probably most importantly, the whole idea of adding applications to a voice-based personal assistant is a difficult thing for many people to grasp. After all, the whole concept of an intelligent assistant is that you should be able to converse with it and it should understand what you request. The concept of “filling in holes” in its understanding (or even personality!) is going to be a tough one to overcome. People want a voice-based interaction to be natural and to work. Period. The company that can best succeed on that front will have a clear advantage.
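To see why a screen-free, keyword-triggered model caps the number of practical add-ons, here is a hypothetical sketch of invocation-phrase routing; the skill names are invented purely for illustration.

```python
# Hypothetical sketch of why invocation-keyword routing limits usable "skills":
# every add-on needs a trigger phrase the user must recall unaided, since
# there's no screen to browse. Skill names here are made up.

skills = {
    "daily stretch": lambda req: "Starting your five-minute stretch routine.",
    "tea timer":     lambda req: "Steeping timer set for three minutes.",
    "cat facts":     lambda req: "Cats sleep roughly two-thirds of the day.",
}

def route(utterance: str) -> str:
    # The device can only hand off to a skill if the user says its exact
    # invocation phrase; otherwise the request falls back to core features.
    for name, handler in skills.items():
        if name in utterance.lower():
            return handler(utterance)
    return "Handled by the assistant's core functionality."

print(route("open tea timer"))
print(route("what's the weather tomorrow?"))  # no skill keyword: core handles it
```

The more of these trigger phrases a user is expected to memorize, the less the interaction feels like natural conversation, which is exactly the friction described above.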

Despite these concerns, that doesn’t mean the opportunity for voice-based computing devices will be small, but it probably does mean there won’t be a very large “skills” economy. Most of the capability is going to have to be delivered by the core device provider and most of the opportunity for revenue-generating services will likely come from the same company. In other words, owning the platform is going to be much more important for these devices than it was for smartphones, and companies need to think (and plan) accordingly.[pullquote]Existing business models and existing means for understanding the role that technologies play don’t always transfer to new environments, and new rules for voice-based computing still need to be developed.[/pullquote]

That doesn’t mean there isn’t any opportunity for add-ons, however. Key services like music streaming, on-demand economy requests, and voice-based usage or configuration of key smart home hardware add-ons, for example, all seem like clearly valuable and viable capabilities that customers will be willing to add on to their devices. In each of those cases, it’s also important to realize that the software isn’t likely to represent a revenue opportunity of its own, but simply a means of accessing an existing service or piece of hardware.

New types of computing models take years to really enter the mainstream, and we’re arguably still in the early innings when it comes to voice-driven interfaces. But, it’s important to realize that existing business models and existing means for understanding the role that technologies play don’t always transfer to new environments, and new rules for voice-based computing still need to be developed.

Inside the Mind of a Hacker

Writing about security is kind of like writing about insurance. As a responsible adult, you know it’s something you should do every now and then, but deep down, you’re really worried that many readers won’t make it past the second sentence. (I hope you’re still here.)

Having recently had the privilege of moderating a panel entitled “Inside the Mind of a Hacker” at the CyberSecurity Forum event that occurred as part of CES, however, I’ve decided it’s time. The panel was loaded with four smart and opinionated security professionals who hotly debated a variety of topics related to security and hacking.

Speaking to the theme of the panel, it became immediately clear that the motivations for the “bad guy” hackers (there was, of course, a brief, but strong show of support for the white hat “good” hackers) are exactly what you’d expect them to be: money, politics, pride, power and revenge.

Beyond some of the basics, however, I was surprised to hear the amount of dissent on the topics discussed, even by those with some impressive credentials (including work at the NSA, managing cyber intelligence for Fortune 500 companies and government agencies, etc.). One particularly interesting point, for example, highlighted that hackers are people too—meaning, they make mistakes. In fact, thankfully, apparently quite a lot of them. While in retrospect that seems rather obvious, given the aura of invincibility commonly attributed to hackers through popular media, it wasn’t something I expected to hear.

Another key point was the methodology used by most hackers. Most agreed that the top threat is from phishing attacks, where employees at a company or individuals at home are lured into opening an attachment or clicking on a link that triggers a series of, well, unfortunate events. Even with up-to-date anti-malware software and security-enhanced browsers, virtually everyone (and every company) is vulnerable to these increasingly sophisticated and tricky attacks. However, several panelists pointed out that too much attention is spent trying to remedy the bad situations created by phishing attacks, instead of educating people about how to avoid them in the first place.

Looking forward, the rapid growth of ransomware, in which companies or individuals are locked out of their systems and/or data until a ransom is paid to unlock them, was one of the panelists’ biggest concerns. Attacks of this sort are growing quickly and most believe the problem will get much worse in 2017. In many cases, organized crime is behind these types of incidents, and with the popularity of demanding payment in bitcoin or other payment methods that are nearly impossible to trace, the issue is a very challenging one.

Another concern the panel tackled was security issues for Internet of Things (IoT) devices. Many companies getting involved with IoT have little to no security experience or knowledge and that’s led to some gaping security holes that automated hacking tools are quick to find and exploit. Thankfully, the group agreed there is some progress happening here with newer IoT devices, but given the wide range of products already on the market, this problem will be with us for some time. One potential solution that was discussed was the idea of an IoT security standard (along the lines of a UL approval), which is a topic I wrote about several months back. (See “It’s Time for an IoT Security Standard”)[pullquote]There are few if any things that can be completely blocked from hacking efforts, but huge progress could be made in cyber security if companies and people would just start actually using some of the tools already available.[/pullquote]

Another potential benefit could come from improved implementations of biometric authentication, such as fingerprint and iris scans, as well as leveraging what are commonly called “hardware roots of trust.” Essentially, this provides a kind of digital ID that can be used to verify the authenticity of a device, just as biometrics can help verify the authenticity of an individual. Both of these concepts enable more active use of multi-factor authentication, which can greatly strengthen security efforts when combined with encryption, stronger security software perimeters, and other common sense guidelines.
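As a conceptual sketch only (not a real attestation or biometric protocol), the Python below shows how a hardware-backed device key and a biometric check might be combined into a multi-factor decision; the key, nonce, and fingerprint template are made-up placeholders.

```python
# Conceptual sketch of combining a device identity check ("hardware root of
# trust" style) with a biometric factor. This illustrates multi-factor thinking
# only; it is not a real attestation or biometric matching protocol.

import hmac
import hashlib

DEVICE_KEY = b"secret-provisioned-at-manufacture"  # assumed to live in secure hardware
ENROLLED_FINGERPRINT_HASH = hashlib.sha256(b"alice-fingerprint-template").hexdigest()

def device_is_authentic(challenge: bytes, response: bytes) -> bool:
    # The server sends a random challenge; only a device holding the key can
    # produce the matching HMAC, proving "this is the hardware we issued."
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def user_is_authentic(fingerprint_template: bytes) -> bool:
    # Real biometric matching is fuzzy; hashing a template is just a stand-in.
    return hashlib.sha256(fingerprint_template).hexdigest() == ENROLLED_FINGERPRINT_HASH

def grant_access(challenge, device_response, fingerprint_template) -> bool:
    # Multi-factor: both the device *and* the person must check out.
    return (device_is_authentic(challenge, device_response)
            and user_is_authentic(fingerprint_template))

challenge = b"random-nonce-123"
response = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
print(grant_access(challenge, response, b"alice-fingerprint-template"))  # True
```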

As the panel was quick to point out, there are few if any things that can be completely blocked from hacking efforts. Nevertheless, huge progress could be made in cyber security if companies and people would just start actually using some of the tools already available. Instead of worrying about solving the toughest corner cases, good security needs to start with the basics and build from there.

Podcast: CES 2017 and Detroit Auto Show Autonomous Cars

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss car technology announcements from CES 2017 and the NAIAS 2017 Detroit Auto Show, as well as the regulatory, technical and business challenges facing autonomous cars.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Takeaways from CES 2017

By now you’ve undoubtedly read or viewed several different CES stories across a wide range of publications and media sites. So, there’s no need to rehash the details about all the cool, crazy, or just plain interesting new products that were introduced at or around this year’s show.

But it usually takes a few days to think through the potential impact of what these announcements mean from a big picture perspective. Having spent time doing that, here are some thoughts.

The impact of technology on nearly all aspects of our lives continues to grow. Yes, I realize that seems somewhat obvious, but to actually see or at least read about the enormous range of products and services on display at this year’s show makes what is typically just a conceptual observation very real. From food to sleep to shelter to work to entertainment (of all kinds!) to health to transportation and beyond, it’s nearly impossible to imagine an activity that humans engage in that wasn’t somehow addressed at this year’s show. Having attended approximately half of the 50 CES shows that have now occurred, I find the expanding breadth of the show never ceases to amaze me. In a related way, the range of companies that are now participating in some way, shape, or form is surprisingly diverse (and will only increase over time).

Software is essential, but hardware still matters. At the end of the day, it’s the experience with a product that ultimately determines its success or failure. However, when you’re surrounded by the products and services that will drive the tech industry’s agenda for the next 12 months, it’s immediately clear that hardware plays an enormously critical role. From subtle distinctions like the look and feel of materials, to the introduction of entirely new types of tech products, the importance of hardware devices and key hardware components continues to grow, not shrink (as some have suggested).

What’s old can be new again. Though TVs and PCs may sound like products from a different era to some, this year’s show once again proved that the right technological developments combined with human ingenuity can produce some very compelling new products. Even long-forgotten technologies like front projection can be transformed in ways that make them very intriguing once again. Plus, it’s becoming increasingly clear that, just like the fashion and music industries, the tech industry is developing a love affair with retro trends. From vinyl to Game Boys and beyond, it seems many types of older tech are going to be revisited and renewed.

We are on the cusp of some of the biggest changes in technology that we’ve seen in some time. The integration of “invisible” technologies that we can’t directly see but still interact with is going to drive some of the most profound developments, power shifts, and overall restructuring that’s ever occurred in the tech industry. Oh, and it’ll make for some incredibly useful and compelling new product experiences too.[pullquote]The integration of “invisible” technologies that we can’t directly see but still interact with is going to drive some of the most profound developments, power shifts, and overall restructuring that’s ever occurred in the tech industry.[/pullquote]

Voice-control will certainly be part of this, but there will be much more. In fact, the range of new products and services, as well as enhancements and recreations of existing products and services, that AI, deep learning, and other advanced types of software technologies can enable in combination with sensors, connectivity, and powerful distributed computing is going to be transformational. Sure, there’s been talk of adding intelligence to everything for quite some time, but many of the announcements from this year’s CES demonstrate that this promise is now becoming real.

Finally, trade shows still matter, even in tech. Yes, virtual reality may one day provide us with the freedom to avoid the crowds, hassles, and frustrations of trekking to an alternative location, and seemingly everyone who goes likes to complain about attending CES, but there’s nothing quite like being there. From serendipitous run-ins with industry contacts, to seeing how others react to products and technologies you find interesting, there are lots of reasons why it’s going to be difficult to completely virtualize a trade show for some time to come.