Podcast: Apple 2018 Worldwide Developers Conference

This week’s Tech.pinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell analyzing Apple’s WWDC event in great detail, including new announcements around iOS 12, watchOS 5 and macOS Mojave.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Qualcomm Announces New Snapdragon for PCs, Kills Partners’ Near-Term Prospects

At this week’s Computex show in Taiwan, Qualcomm announced the next generation of silicon for the Windows on Snapdragon platform. The new chip is called the Snapdragon 850, and rather than simply repurposing an existing high-end smartphone processor, the company has cooked up a modified chip specifically for the Windows PC market. Qualcomm says the new chip will provide a 30 percent system-wide performance boost over the previous generation. I’m pleased to see Qualcomm pushing forward here, as this area will eventually evolve into a crucial piece of the PC market. However, announcing it now, with an eye toward new products appearing by year’s end, puts its existing hardware partners in a very tough spot.

Tough Reviews, And a Short Runway
Qualcomm and Microsoft officially launched the Windows 10 PCs powered by the Snapdragon Mobile PC Platform in December 2017. The promise: By using the Snapdragon 835 processor and related radios, Windows notebook and detachable products would offer instant startup, extremely long battery life, and a constant connection via LTE. Initial PC partners included HP, Lenovo, and ASUS.

Reviews of the three initial products have been mixed at best, with many reviewers complaining about slow performance, driver challenges, and app compatibility. But most also acknowledge the benefits of smartphone-like instant on, the luxury of connectivity beyond WiFi, and battery runtimes measured in days versus hours. I’d argue that the technical issues of rolling out a new platform like this were unavoidable. However, the larger self-inflicted wound here was that nobody did a great job of articulating who these products would best serve. This fundamental issue led to some head-scratching price points and confused marketing. I talked about the missed opportunity around commercial users back in December.

There was also the issue of product availability. While the vendors announced their products back in December, shipments didn’t start until 2018. In fact, while HP’s $1,000 Envy X2 started shipping in March, neither Lenovo’s $900 Miix 630 nor ASUS’s $700 NovaGo TP370QL is widely available even today. Amazon recently launched a landing page dedicated to the Always-Connected Windows 10 PC with a bundled option for free data from Sprint for the rest of 2018. The ASUS product moved from pre-order to available on June 7; Lenovo’s product still has a pre-order button that says it will launch June 27th.

That landing page appears to have gone live just days before Qualcomm announced the 850 in Taiwan and promised new hardware from partners, including Samsung, by the end of the year. Now, if I’m one of these vendors who threw support behind Windows on Snapdragon early, only to have Qualcomm Osborne my product before I’ve even started shipping it, I’m not a happy camper.

Might as Well Wait
As a frequent business traveler, I find the Windows on Snapdragon concept very appealing. I realize that performance won’t come close to what even lower-end x86 processors from Intel and AMD offer, but I’m willing to make that trade for the benefits. As a result, I expect that for the first few years these types of PCs will be better as companion/travel devices rather than outright replacements for a traditional PC. In my case, I could see one competing for space in my bag with the LTE-enabled iPad Pro I carry today. Except when I carry the Pro, I still must carry my PC because there are some tasks I can’t do well on iOS.
Both the Lenovo and HP products are detachable tablets, whereas the ASUS is a convertible clamshell, which is the form factor I’m most eager to test. I was close to pulling the trigger on the ASUS through Amazon when the Qualcomm 850 news hit. Buying one now seems wasteful, with new, improved product inbound by the holidays. And that’s not the kind of news vendors want to hear.

Now many will say that this is the nature of technology, that something new is always coming next. And while that’s essentially a true statement, this move seems particularly egregious at a time when Qualcomm and Microsoft are trying to get skeptical PC vendors to support this new platform. Plus, we’re not talking about a speed bump to a well-established platform; this is a highly visible initiative with an awful lot of skeptics within the industry. Qualcomm might have decided that the poor initial reviews warranted a fast follow-up; one hopes their existing partners were in on that decision.
Bottom line: I continue to find the prospects of Windows on Snapdragon interesting, and I expect the new products based on the 850 chip will perform noticeably better than the ones running on the 835. But if Qualcomm and Microsoft expect their partners to continue to support them in this endeavor, they’ve got to do a better job of supporting them in return.

Intel and AMD both dive into many-core CPU race

It was not long ago that 2- and 4-core processors seemed immovable fixtures of the consumer CPU market. Both Intel and AMD had settled on four cores as the pinnacle of our computing environments, at least when it came to mainstream PCs. And in the notebook space, the bar was even lower, with the majority of thin and light machines shipping from OEMs with dual-core configurations, leaving only the flagship gaming devices with H-series quad-core options.

Intel first launched 6-core processors in its HEDT (high-end desktop) line back in 2010, when it came up with the idea of migrating its Xeon workstation products to a high-end, high-margin enthusiast market. But core-count increases were slow to arrive, due both to software limitations and to competition from AMD that was minimal, at best.

But when AMD launched Ryzen last year, it started a war that continues to this day. By releasing an 8-core, 16-thread processor at mainstream prices, well under where Intel had placed its HEDT line, AMD was able to accomplish something that we had predicted would start years earlier: a core count race.

Obviously AMD didn’t create an 8-core part and price it aggressively against Intel’s options out of the goodness of its heart. AMD knew that it would fall behind the Intel CPU lineup when it came to many single-threaded tasks like gaming and productivity. To differentiate, and to be able to claim performance benefits in other, more content-creation-heavy tasks, AMD was willing to spend additional silicon, providing an 8-core design priced against Intel’s 4-core CPUs.

The response from Intel was slower than many would have liked, but respond it did. It launched 6-core mainstream Coffee Lake processors that closed the gap, but they required new motherboards and appeared to knock Intel off its expected release cadence.

Then AMD brought out Threadripper, its first-ever competitor to Intel’s X-series platforms, doubling the core count to 16 cores with 32 threads available. As a result, Intel moved up its schedule for Skylake-X and released parts with up to 18 cores, though at very high prices by comparison.

Internally, Intel executives were livid that AMD had beaten them to the punch and had been able to quickly release a 16-core offering to steal mindshare in a market that Intel had created and led throughout its existence.

And thus, the current many-core CPU race began.

At Computex this week, both Intel and AMD are beating this drum. The many-core race is showing all its glory, and all of its problems.

Intel’s press conference came first, and the company had heard rumblings that AMD might be planning a reveal of its 2nd-generation Threadripper processors with higher core counts. So it devised an impressive demonstration of a 28-core processor running at an unheard-of 5 GHz on all cores – it’s hard to overstate how impressive that amount of performance is. It produced a benchmark score in a common rendering test that was 2.2x faster than anything we had seen previously in a single-socket, stock configuration.

This demo used a socket previously unutilized on consumer platforms, LGA3647, built for the current generation of Xeon Scalable processors. This chip is also a single, monolithic die, which does present some architectural benefits over AMD’s multi-chip designs if you can get past the manufacturing difficulties.

However, there has been a lot of fallout from this demo. Rather than anything resembling a standard consumer cooling configuration, Intel used a water chiller running at 1 HP (horsepower), utilizing A/C refrigerant and insulated tubing to get the CPU down to 4 degrees Celsius. This was nothing like a consumer product demo, and was more of a technology and capability demo. We will not see a product at these performance levels available to buy this year, and that knowledge has put some media, initially impressed by the demo, in a foul mood.

The AMD press conference was quite different. AMD SVP Jim Anderson showed a 32-core Threadripper processor using the same socket as the previous generation solutions. AMD is doubling the core count for its high-end consumer product line again in just a single year. This brings Threadripper up to the same core and thread count as its EPYC server CPU family.

AMD’s demo didn’t focus on specific performance numbers though it did compare a 24-core version of Threadripper to an 18-core version of Intel’s currently shipping HEDT family. AMD went out of its way to mention that both the 24-core and 32-core demos were running on air-cooled systems, not requiring any exotic cooling solutions.

It is likely AMD was planning to show specific benchmark numbers at its event, but because Intel had gone the “insane” route and put forward some unfathomably impressive scores, AMD decided to back off. Even though the media and analysts who pay attention to the circumstances around these demos would understand how skewed the comparison was, the comparison would have been made anyway, and AMD would have lost it.

As it stands, AMD was showing us what we will have access to later in Q3 of 2018 while Intel was showing us something we may never get to utilize.

The takeaway from both events and product demos is that the many-core future is here, even if the competitors took very different approaches to showcase it.

There are legitimate questions about the usefulness of this many-core race, as the software that can utilize this many threads on a PC is expanding only slowly, but creating powerful hardware that offers flexibility to the developer is always a positive move. We can’t build the future if we don’t have the hardware to do it.

Apple No Longer Tells Users What Is Best For Them

At the end of the keynote at Apple’s Developer Conference on Monday, there were two areas where I thought Apple clearly decided it was not up to them to tell their users what they should and should not do: Siri Shortcuts and Screen Time.

Over the years, Apple has been criticized for deciding what was best for their users: the color scheme on your Mac, the U2 album in your library and the slowing down of your old iPhone to preserve battery. In all these cases, users did not like that a decision was made for them, so how could they appreciate Apple telling them how best to take advantage of Siri, manage their time and parent their children?

Apple thought it was more useful to provide tools to users so they could decide how to do all those things better. Such a change might come from a shift in company philosophy. I do think, however, it is more likely to have come from the realization that Apple’s users are today as diverse as they have ever been. For Apple to find a middle ground between my mom, my daughter, and me is no easy task.

Siri Shortcuts Aimed at Pros to Benefit the Masses

Apple has been trying to figure out how to talk about AI, ML, and Siri over the past year or so. Siri used to be about voice, and the other “smartness” happening on the iPhone was not necessarily called out. With the introduction of the A10 Fusion and A11 Bionic, Apple started to be more explicit in calling out AI- and ML-enabled capabilities. On Monday, however, rather than talk about AI, Apple focused on positioning Siri as an assistant that helps you even when you do not talk to her, just like a human assistant would.

Digital assistant adoption is still in its infancy, and so is the understanding and embracing of AI. Assistants are also suffering from users having to learn how to communicate with them. While some are more flexible than others, we are all trying to determine the exact way to ask them to do something for us and, let’s face it, we are still far from natural language.

The introduction of Siri Shortcuts seems to try and bypass these issues. Siri Shortcuts are a way for users to put together a phrase to either do one task, like “find my keys,” or trigger a chain of actions, like a “morning routine” that sets an alarm, checks the traffic and reminds me to order coffee. Siri will also proactively suggest Shortcuts based on your behavior, behavior that is very different across the large user base Apple has today. Siri Shortcuts put the “burden,” for lack of a better word, of the setup on the user, which is indeed not for everyone, I would expect. However, heavily engaged users will spend the time setting them up, and as they see the return, they will do more. In a way, I think of these users as being similar to those who spent time fine-tuning their Apple Watch to become a complement to their iPhone rather than a replica of it.
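
For developers, the hook behind all of this is the shortcut “donation” mechanism in iOS 12. As a rough illustration only (this is not Apple’s sample code, and the activity type, titles, and phrase below are made-up placeholders), an app might donate a “morning routine” style action along these lines:

```swift
import UIKit
import Intents

// Minimal sketch: donating a hypothetical "morning routine" action so Siri
// can learn when to suggest it. All identifiers here are placeholders.
func donateMorningRoutineShortcut(from viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.app.morningRoutine")
    activity.title = "Run my morning routine"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true                // lets Siri suggest it (iOS 12+)
    activity.suggestedInvocationPhrase = "Morning routine" // phrase offered when adding to Siri
    activity.persistentIdentifier = "morning-routine"

    // Marking the activity as current counts as a donation, so Siri can
    // begin surfacing it on the lock screen, in Search, and in Settings.
    viewController.userActivity = activity
    activity.becomeCurrent()
}
```

Donations like this are what allow Siri to surface an action proactively and what let users attach their own phrase to it, which is exactly where the setup “burden” on the user comes in.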

Apple will learn from these early adopters and could feed data into ML models to create the most popular shortcuts for a broader set of users. The whole “Siri is behind” rhetoric is, after all, impacting engaged users more than those who are interested in using Siri to set a timer.

Screen Time Empowers You through Data

Digital health has risen to the attention of many over the past few months, and companies are starting to respond. Apple, similarly to Google, is providing tools to raise awareness of what we all do with our devices. While a lot of the attention has been on the well-being of kids, adults too could benefit from a little less screen time, and I sure know I could. Apple took a two-pronged approach. On the one hand, it has made Do Not Disturb more efficient and broader; on the other, it added Screen Time which, similarly to Google’s Dashboard, gives you a lot of information about how you use your apps. Siri also steps in to help you manage notifications, which are a big part of what attracts you to look at your phone in the first place. The way you engage with those notifications will give Siri a clue about how important they are and will help it suggest how best to set them up.

We live, however, in a free-will world, so Apple is not shutting things down for you. Users are in control, and they should self-manage. I am a little skeptical about adults really making changes for the better, but maybe I am just projecting my own fear about my ability to change.

Where I do think there is a lot of potential is in Screen Time and kids. I have always maintained that, as a parent, it is my responsibility to manage my child’s screen time, but I welcome any vendor giving me tools to help me do that. What I like about Apple’s Screen Time is that I can teach my daughter to be responsible about device time like she is responsible for other things in her analog life: feeding the bearded dragon, keeping track of her belongings at school and cleaning up her toys. I want my child to look at the Screen Time report and responsibly learn to self-manage. I want her to understand that it is not just about time with the device, it is about how you use that time. There is a difference between reading books, writing your journal, or drawing on your iPad and spending hours on Snapchat or YouTube. I do not expect her to get there straight away, but I think that having more self-awareness will undoubtedly help.


Overall I felt Apple focused on practical improvements to the experience users have today. It was not all sexy; truth be told, most of it was not. But that does not mean it will not help grow engagement and loyalty.


Siri Shortcuts Highlights Evolution of Voice-Based Interfaces

To my mind, the most intriguing announcement from this year’s Apple Worldwide Developers Conference (WWDC) was the introduction of Siri Shortcuts. Available across iOS devices with iOS 12 and Apple Watches with watchOS 5, Siri Shortcuts essentially adds a new type of voice-based user interface to Apple devices.

It works by letting you build macro-like shortcuts for basic functions across a wide variety of applications and then execute them by simply saying the name of your custom-labelled function to Siri. Critically, they can be used not just with Apple apps and iPhone or iPad settings, but across applications from other vendors as well.
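
To make the custom-labelled part concrete, here is a hedged sketch (the identifiers are invented, and this is not Apple’s sample code) of the iOS 12 “Add to Siri” flow, in which an app wraps one of its actions in a shortcut and the system UI lets the user record whatever phrase they want to trigger it with:

```swift
import UIKit
import IntentsUI

// Minimal sketch of the iOS 12 "Add to Siri" flow. The activity type and
// strings are hypothetical placeholders for one of the app's own actions.
final class AddToSiriPresenter: NSObject, INUIAddVoiceShortcutViewControllerDelegate {

    func presentAddToSiri(from presenter: UIViewController) {
        let activity = NSUserActivity(activityType: "com.example.app.orderCoffee")
        activity.title = "Order my usual coffee"
        activity.isEligibleForPrediction = true

        // Wrap the action in an INShortcut and hand it to the system UI,
        // which records the user's custom invocation phrase.
        let shortcut = INShortcut(userActivity: activity)
        let addShortcutVC = INUIAddVoiceShortcutViewController(shortcut: shortcut)
        addShortcutVC.delegate = self
        presenter.present(addShortcutVC, animated: true, completion: nil)
    }

    // Called when the user finishes recording a phrase (or an error occurs).
    func addVoiceShortcutViewController(_ controller: INUIAddVoiceShortcutViewController,
                                        didFinishWith voiceShortcut: INVoiceShortcut?,
                                        error: Error?) {
        controller.dismiss(animated: true, completion: nil)
    }

    // Called when the user backs out without adding the shortcut.
    func addVoiceShortcutViewControllerDidCancel(_ controller: INUIAddVoiceShortcutViewController) {
        controller.dismiss(animated: true, completion: nil)
    }
}
```

Notably, the heavy lifting (recognizing the phrase and invoking the app’s action) stays on the system side, which is part of what makes this feel more like a voice-based UI than a full assistant.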

Early on, most digital assistant platforms, such as Siri, Amazon’s Alexa, and the Google Assistant, focused on big picture issues like answering web-based queries, scheduling meetings, getting updates on quick data nuggets like traffic, weather, sports scores, etc. Most assistant platforms, however, didn’t really make your smart devices seem “smarter” or, for that matter, make them any easier to use.

With the introduction of Samsung’s Bixby, we saw the first real effort to make a device easier to use through a voice-based interaction model. Bixby’s adoption (and impact) has been limited, but arguably that’s primarily because of the execution of the concept, not because of any fundamental flaw in the idea. In fact, the idea behind a voice-based interface is a solid one, and that’s exactly what Apple is trying to do with Siri Shortcuts.

At first glance, it may seem that there’s little difference between a voice-based UI and a traditional assistant, but there really is. First, at a conceptual level, voice-based interfaces are more basic than an assistant. While assistants need to do much of the work on their own, a voice-based UI simply acts as a trigger to start actions or to allow easier discovery or usage of features that often get buried under the increasing complexity of today’s software platforms and applications. It’s commonly estimated that most people use less than 10% of the capabilities of their tech products. Much of that limit exists because people don’t know where to find certain features or how to use them. Voice-based interfaces can solve that problem by allowing people to simply say what they want the device to do and have it respond appropriately.

Given the challenges that many people have had with the accuracy of Siri’s recognition, this more simplistic approach is actually a good fit for Apple. Essentially, you’ll be able to do a lot of cool “smart” things with a much smaller vocabulary, which improves the likelihood of positive outcomes.

Another potentially interesting development is the possibility of its use with multiple digital assistants for different purposes. While I highly doubt that Apple will walk away from the ongoing digital assistant battle, they might realize that there could be a time and a place for, say, using Cortana to organize work-related activities, using Google Assistant for general data queries and using Siri for a variety of phone-specific functions—at least in the near term. Of course, a lot of questions would need to be answered and APIs opened up before that could occur, but it’s certainly an intriguing possibility. Don’t forget, as well, that Apple has already created a connection between IBM’s Watson voice assistant and iOS, so the idea isn’t as crazy as it may first sound.

Even within the realm of a voice UI, it makes sense to add some AI-type functions. In fact, Apple’s approach to doing on-device machine learning to help maintain data privacy makes perfect sense, with a function/application that lets you use the specific apps installed on your device and provides suggestions based on the contacts and/or other personalized data stored in your phone. This is where the line between assistant and voice UI admittedly starts to blur, but the Apple offering still makes for a more straightforward type of interaction model that its millions of users will likely find to be very useful.

As interesting as the IFTTT (If This Then That)-like macro workflows that Siri Shortcuts can bring to more advanced users may be, however, I am a bit concerned that mainstream users could be confused and overwhelmed by the capabilities that Shortcuts offers. Yes, you can achieve a lot, but even from the brief demo onstage, it’s clear that you also have to do a lot to make it work well. By the time it’s officially released as part of iOS 12 this fall (as a free upgrade, BTW), I’m hoping Apple will create a whole series of predefined Siri Shortcuts that regular users can quickly access or easily customize.

The world of voice-based interactions continues to evolve, and I expect to see a number of advancements in full-fledged assistant models, voice-based UIs, and combinations of the two. Long-term, I believe Siri Shortcuts has the opportunity to make the biggest impact on how iOS users interact with and leverage their devices of anything announced with iOS 12, and I’m really looking forward to seeing how it evolves.

Client Hardware and Business Transformation

Last month I had the privilege of attending Dell Technologies World in Las Vegas, where the overriding theme was Business Transformation. This term is being used a lot these days to explain the overall shift from a PC-centric IT world to one where the Cloud sits at the center of the IT universe, and the client can be anything from a PC to a tablet, smartphone, or even an IoT connection. It also speaks to the integration of essential tools that provide high-level security, collaboration and the many other elements IT needs to deliver a more seamless way for individuals to work more effectively and productively within their organizations.

There is no question that we are moving to a brave new world where anyone who works within an IT organization, whether it is a big one or a small one, is demanding that the tools they use as clients are the ones they are most comfortable with, whether that means Windows, macOS, iOS, Android or Chrome OS.

Over the years I have worked on well over 100 IT integration projects as well as served as the co-chair of the largest CRM conference in the US. While I understand the overall enterprise space, my specific role in these projects was mostly focused on the client area, and I served as the advocate for the actual user.

In the past, I evaluated hundreds of laptops and dozens of smartphones under consideration for these IT programs and projects. In these projects, I would put myself in the place of the intended user and, looking at the goal and scope of the project, would make recommendations for what type of client would be best for various individuals to meet the needs of both the user and the IT director. I have helped influence buying decisions for up to 50,000 laptops in multiple IT projects over the years and continue to make these kinds of recommendations on all types of enterprise projects today.

With that in mind, I have been thinking a lot about the current state of the workstyles of what has become a more mobile workforce and the kind of tools they need to be more effective as part of any business transformation. More specifically, I have been looking closer at my own needs in client-based technology for me to be more effective in my job.

In this process, I have discovered that my workflow is much like that of the average knowledge worker today. Today’s workers are very mobile and use laptops, tablets, and smartphones as part of their daily activities. For all of us, the most important device is the one that is needed for the specific task we are doing at any given time. Knowledge workers sometimes work at their desk; other times they are involved in conference room meetings or will take the laptop, tablet or smartphone with them to lunch or some other off-site venue.

However, it turns out that in most cases the laptop is the real workhorse of the knowledge worker, and in my case, two additional technologies have dramatically impacted my productivity. These come in the form of docking stations, or connectivity to various I/O inputs, and large monitors. Most laptop screens are in the 12″ to 15″ range, and when working at a desk for hours at a time, a large monitor has become an even more important tool that enhances my workflow and overall productivity.

Although larger monitors from 19″ to 29″ give users more screen space to work with, I found that a 34″ widescreen monitor is the most useful new tool for enhancing my productivity. In my case, I use the Dell UltraSharp 34-inch curved monitor.

While I have used large monitors connected to my laptop for two decades, I was surprised how much a 34″ curved monitor truly impacted the way I work. Because I have so much more screen real estate to work with, I can put three different applications on the screen at any given time. In my case, the left third has my email; the center has the application I am working in at any given time, and the right third has a Web browser that gives me constant and immediate access to info I may need when writing, researching, keeping up with news, etc.

I cannot emphasize enough how something as simple as a large widescreen monitor has impacted the way I work and enhanced my overall productivity. I don’t say this lightly, but it has changed the way I work and made working at my desk a pleasure.

Ironically, when most prominent tech companies talk about business transformation, and especially the client area, they mostly focus on the roles the laptop, tablet, and smartphone play. But I would argue that these companies also need to help their customers understand that, for any knowledge worker who spends serious time at a desk, docking stations and large monitors can be essential tools in the business transformation process.

As an advocate for users in IT projects, I now suggest that docking stations and larger monitors be considered part of the mix, as I feel they add a great deal for users who spend a lot of time at a desk working on their laptops.

What is ‘Forecastable’ About 5G?

A frequently asked question of industry analysts is “how big is the 5G market going to be?” The real answer is that although it might be possible to forecast the size of certain aspects of 5G, it is nearly impossible to forecast the 5G market as an aggregate entity. Instead, I’d argue that there are several discrete, and not very related, elements of what might constitute a 5G forecast. Hopefully the thought process below will help those who want to gauge the progress and dimensions of this exciting new phase of wireless and broadband.

First off, over the next couple of years, there will not be a distinct 5G service in the conventional way we would think about it. It will be a few “islands of 5G” in a sea of 4G LTE – where a relatively limited subset of cells in a certain market will be equipped with 5G radios. It will look more like roaming onto a 5G hotspot (like Wi-Fi) than a seamless service – even from those who say they’ve deployed ‘mobile 5G’. Some operators might even obfuscate real 5G with services that merely look like 5G, or with elements of the LTE roadmap that boast 5G-esque characteristics or performance. Full-throttle gigabit LTE service will feel as good as early 5G, as we’ve seen from some of the marketing, and even branding, such as AT&T’s 5G Evolution.

The equipment deployment aspect of 5G is going to be especially tricky to count. Many operators are adding small cells and densifying their networks in order to support more capacity for LTE, and also to be ready for the higher bands that will be used for 5G, such as 3.5 GHz or mmWave. While 5G might be a driver for these deployments, or a beneficiary of them down the line, they still fall in the 4G bucket.

So, one way to look at this might be to measure the percentage of cell sites in a given area that have been equipped with or upgraded to 5G. Keep in mind that many of the sites that have shipped over the past 2-3 years are 5G ready, meaning they can be upgraded to 5G with a software update.

Second, some 5G services will be based on specific business cases but will be counted outside of some of the metrics we typically use in the mobile industry. Case in point is Verizon’s soon-to-be-launched fixed wireless access (FWA) service. Here, 5G might be the access mechanism, but really this sits in Verizon’s fixed broadband bucket, alongside Fios or DSL. No Verizon Wireless customer will be able to buy this service unless they want to use it for broadband in their home.

This gets even tougher with other markets that will drive the 5G business case. The IoT aspect is a good example. One of the important capabilities of 5G is that it can handle millions of connected devices simultaneously. But it could well be that these are many millions of low power devices consuming very little data. Not the ‘faster, higher, stronger’ flavor of what we normally consider in a wireless network upgrade. Or, take connected cars. Vehicles using V2X will employ 5G for some aspect of a vehicle’s overall communications, but this will be a challenge to actually count.

The most countable – and hence most forecastable – aspects of 5G will come from the equipment and device sides. It will be easiest in greenfield implementations, such as mmWave, where all the radios are 5G by default. It is possible to forecast and count 5G NR radios, even though they might not be the same as 4G radios in the physical sense of the word. It will also be possible to forecast 5G devices that can talk to a 5G radio. Initially these will be 5G bricks and dongles, but in 2019, we’ll see some early 5G phones. There will also be a much larger number and variety of devices that are 5G enabled, meaning they contain a chipset that can talk to a 5G radio. Another new ‘category’ will be the CPE for fixed wireless, both at the site as well as the little box that sits outside the window.

There is also a slew of technologies and products that will deliver some of the capabilities that make 5G distinct. This includes advanced antennas, the techniques behind advanced beamforming, and network slicing. These are all parts of the 5G ‘market’, but difficult to isolate, as in “the market for network slicing is X”. It is more that the capability for network slicing is one of the key features of 5G, and might help open up new frontiers in the enterprise mobile market.

Those who have read this far might argue that much of my argument could well apply to what we saw with 4G LTE. But I’d argue that this is different, and more complex. 5G will be rolled out on a more staggered basis, on timetables that vary significantly from one country (or city) to the next, and only sometimes on purpose-built spectrum. It will be spread across a much greater variety of use cases/application types and on a much broader suite of devices, not all of which might use 5G in the conventional sense. Actually, a minority of 5G devices will be phones as we know them today.

5G is going to be something very significant over the next 10 years, but it will be some time before we can get our arms around what actually constitutes the “5G Market”.

New XR1 chip from Qualcomm starts path to dedicated VR/AR processing

During the Augmented World Expo in Santa Clara this week, Qualcomm announced a new, dedicated platform for future XR (extended reality) devices, the Snapdragon XR1. Targeting future standalone AR and VR headset and glasses designs, the XR1 marks the beginning of the company’s dedicated platforms aimed squarely at the segment.

The Snapdragon XR1 will be the first in a family of XR-dedicated chips and platforms meant to enable a wide range of performance and pricing classes across an array of partner devices. Qualcomm is only releasing information on the XR1 today, but we can assume that future iterations will follow to create a tiered collection of processors targeting mainstream through flagship hardware.

Though Qualcomm today uses the existing Snapdragon mobile platforms to build its AR/VR reference designs, the expected growth of this field into a 186-million-unit base by 2023 is pushing the company to be more direct and more specialized in its product development.

The Snapdragon XR1 will address what Qualcomm calls the “high quality” implementations of VR. Placed well above the cheap or free “cardboard” integrations of smartphone-enabled designs, the XR1 exists to create solutions that are at or above the current level of standalone VR hardware powered by current Qualcomm chips. Today’s landscape of devices features products like the Oculus Go powered by the Snapdragon 821 mobile platform and the Lenovo Mirage Solo using the Snapdragon 835. Both are excellent examples of VR implementations, but the company sees a long-term benefit from removing the association of “mobile processor” from its flagship offerings for XR.

Instead, creating a customized, high-value brand for dedicated VR headsets gives Qualcomm flexibility in pricing and feature set, without pigeonholing the product team into predefined directions.

The specifics of the new Snapdragon XR1 are a bit of a mystery, but it includes most of the components we are used to seeing in mobile designs. That means a Kryo CPU, an Adreno GPU, a Hexagon DSP, audio processing, security features, an image signal processor, and Wi-Fi. Missing from the mix is an LTE-capable modem, something that seems at least slightly counter to the company’s overall message of always-connected devices.

Detailed performance metrics are missing for now, as Qualcomm allows its partners to design products around the XR1. With varying thermal constraints and battery life requirements, I think we’ll see some design-to-design differences between hardware. At that point we will need Qualcomm to divulge more information about the inner workings of the Snapdragon XR1.

I expect we’ll find the XR1 to perform around the level of the Snapdragon 835 SoC. This is interesting as the company has already announced the Snapdragon 845 as part of its latest AR/VR standalone reference headset design. The XR1 is targeting mainstream pricing in the world of VR, think the $200-$400 range, leaving the Snapdragon 845 as the current leader. If and when we see an XR2 model announced, I expect it will exceed the performance of the SD 845 and the family of XR chips will expand accordingly.

The Qualcomm Snapdragon XR1 does have some impressive capabilities of its own, though keep in mind that all of this is encompassed in the 845 designs as well. Features like 4K HDR display output, 3D spatial audio with aptX and Aqstic capability, voice UI and noise filtering, and even AI processing courtesy of the CPU/GPU/DSP combo all feature prominently in the XR1. The chip will support both 3DoF (degrees of freedom) movement and controllers as well as 6DoF, though the wider range of available movement will be associated with higher-tier, higher-priced devices.

The first crop of announced customers includes HTC Vive and several others. Oculus isn’t on the list, but I think that’s because the Oculus Go was just released and utilizes a lower level of processor technology than XR1 will provide. Qualcomm has solidified its leadership position in the world of standalone AR/VR headsets, with almost no direct competition on the horizon. As the market opportunity expands, so will the potential for Qualcomm’s growth in it, but also the likelihood we will see other companies dip their toes into the mix.

The Risk of Giving up on AI

A couple of weeks ago, my six-year-old Samsung TV died and I decided to invest in a 2018 model with Bixby. On my Galaxy Phone, I have found Samsung’s Bixby to be a good voice UI but not a good all-around assistant.

Setting up the TV was a walk in the park: once I signed into the SmartThings app on my phone, the app saw the TV, set up my Wi-Fi and walked me through the settings. In a few clicks my Xfinity X1, my Xbox and my Apple TV were all connected and set up. Bixby would happily change input, pull up an app and get me the channel I wanted. Right now the experience is like Siri on Apple TV, in that you need to press the button on the remote, but this could change if Samsung starts bringing to market soundbars that are Bixby enabled, for instance.

To me, that is a glimpse of the smartness Bixby could deliver going forward. It is a subset of what an Alexa or Google Assistant would do, but there is value in it for a user. Think about that ecosystem of devices growing from TV plus phone to the fridge or your cleaning robot and all the smart devices in your home.

The Two Sides of the AI Battle

I am sure I am stating the obvious when I say that users will expect their future devices, and the experiences delivered by services and apps, to get smarter. Intelligence will have to permeate everything we touch, with context and an understanding of us as users delivering a much more personal, customized and therefore intelligent experience.

This more intelligent experience will not be limited to exchanges with our assistant, even if the assistant might be the best showcase of such intelligence.

There are two sides to this new battle: ecosystem owners and hardware vendors. For ecosystem owners like Apple, Amazon, and Google, it is quite straightforward. They are building AI and Machine Learning (ML) capabilities and a digital assistant. They can add those to their own devices as a hardware differentiator, or they can build an assistant that they make available for integration in as many devices as possible so that their services can get smarter and be the differentiator for their ecosystem.

On the hardware maker side of the equation, things are a little more complicated. We have already seen hardware vendors adopt different approaches when it comes to integrating AI into their products. Some vendors are embracing what is available as-is, some are embracing Alexa or Google Assistant but trying to differentiate on top of them, and others are building something they can control.

Watching how hardware vendors approach AI says a lot about their aspirations to build an ecosystem for the future. When it comes to Samsung, it is clear that they do not want to be cornered into “just” making hardware and they want to control what AI will enable on top of the current platforms.

Why Embracing Google is not an Option for Samsung

As Samsung’s managers started to talk about the next version of Bixby last week, I read some commentary suggesting that Samsung should just give up and embrace Google Assistant throughout its devices. I find such a suggestion hard to get behind in a world where Apple is growing its services layer, Sony is abandoning hardware to focus on content and services, and Qualcomm’s CEO is warning that 5G will open the floodgates to Chinese players.

I know this is not a fashionable take on things, but I am used to that. I was one of the few people who always said that Nokia was right in not embracing Android as doing so would have killed Nokia, just more slowly and painfully.

The situation Samsung is in today is actually not very different, from a market dynamics perspective, from what Nokia faced back in 2014.

Nokia was the market leader when its management understood that the next phase of mobile phones was about services and apps. That led Nokia on an acquisition spree in all the areas they thought would be critical going forward:

  • Maps: gate5 and Navteq
  • Music: Loudeye
  • Advertising: Enpocket
  • OS: Symbian

Nokia had the right vision but, as was often the case, lacked execution. While they were busy consolidating all those assets into OVI, they took their eyes off the hardware, and Apple and Android blindsided them. Embracing Android at that point would have meant wasting all those investments as well as being pushed to differentiate only on hardware and price. This eventually would have put Nokia where Motorola, HTC and LG found themselves: in a no-margin business with the Chinese vendors making advances and grabbing share.

If Samsung embraced Google Assistant today, it would also let some of its acquired assets go to waste, from SmartThings to Viv to Harman. And while throwing more money at a problem can at times just result in losing even more cash, as Samsung figured out when it pulled the plug on Milk Music, I do think there is scope for Samsung to capitalize on Bixby.

Intelligence means different things to different people. There are Samsung users who are happy to use Google Assistant and embrace Google’s services but prefer Samsung’s hardware. Those users are likely less loyal and less embedded in the Samsung ecosystem. But users who have multiple Samsung devices will benefit from an assistant that could make that cross-device experience better for them.

There is a lot of information that Samsung could gather about me in the home through Bixby: valuable information that might allow Samsung to build routines, control my home and make Bixby the trusted assistant when it comes to home automation and entertainment.

If you share my belief that users will likely have more than one assistant they turn to for their various needs, Bixby could find its rightful role in the mix. That said, if Samsung is interested in fully competing with Alexa and Google Assistant, it had better step up its game, starting with articulating a clear vision of what its AI and assistant strategy is.

Virtual Travel and Exploration Apps Are Key to Mainstream VR Adoption

As we pass Memorial Day and enter the unofficial start of summer and vacation time, it’s natural to think about the potential summer applications of technology. Unfortunately, the most obvious one is that, thanks to smartphones, we’re never really disconnected from our work. That is, unless you have the guts to actually turn your phone off for an extended period of time (and more power to you if you do!)—or are intentionally visiting a place with limited connectivity.

However, there are plenty of very positive applications for summertime relaxation and enjoyment with technology, as well. Relaxing on the beach with your favorite streaming music service, finding the best restaurants to visit in your vacation locale, or planning the perfect summer road trip (and making sure you accurately get there), are all great examples of applications and capabilities that people literally all over the world now take for granted thanks to our tech devices and services.

Looking ahead, I expect we’ll see even more dramatic applications for technology and travel. One of the more intriguing possibilities is the concept of virtual travel and exploration through dedicated virtual reality headsets and applications. In fact, in a recent TECHnalysis Research study of 1,000 US consumers who already own VR and/or AR headsets, the top applications people were already doing on their devices were intensive and casual gaming (to no one’s surprise), but they were only a few percentage points above virtual travel and exploration. This was particularly surprising because the respondents to the survey all identified themselves as gamers. In addition, the top two applications that survey respondents wanted to use but weren’t currently using were simulations (such as riding a virtual roller coaster, etc.), followed closely by virtual travel and exploration.

In other words, even among an arguably gaming-focused crowd, there was tremendous interest in applications and experiences that could bring them to new parts of the world. It’s armchair travel for the 21st century.

The top reason why people said they weren’t currently using those apps is that they simply didn’t know enough about them. Interestingly, the lack of overall awareness is a problem for the entire AR and VR industry. Most people simply aren’t aware of or haven’t had a good opportunity to experience VR or AR. This lack of education or awareness even extends to people who’ve made the effort to purchase and use a headset.

To my mind, this screams of an enormous opportunity for headset makers, AR and VR platform providers, and/or application and content developers to take a more intensive look at travel and experience-focused applications, not just games. There’s clearly a demand among existing gaming-focused device owners, but virtual travel is also the kind of application that could open up VR headset sales to a much broader, mainstream audience.

Thankfully, there are a number of efforts from a variety of vendors to start to address these issues. For personally created content, vendors like Lenovo and Samsung have started to build and sell cameras that are specifically optimized to create 180˚ and 360˚ movies that can be viewed and experienced on VR headsets (as well as on smartphones, PCs, and TVs). Lenovo’s new $299 Mirage Camera follows the Google VR180 format and lets you record and even stream widescreen movies to people wearing the Daydream-compatible Mirage Solo headset. Samsung’s popular Gear 360 camera can shoot either 360˚ or, by flipping a switch, 180˚ video, and can also stream live to Galaxy S8 and S9-driven Gear VR headsets to provide a real-time VR experience.

Travel VR content creation tools are also starting to grow. At this year’s I/O, Google announced a new tool for students called Tour Creator, which builds on the company’s previous Expeditions tool and allows kids (and adults) to create their own virtual travel experiences. Unfortunately, there aren’t yet a lot of truly “killer” virtual travel commercial apps—particularly ones that work across all the different VR platforms. In fact, this is one of the other major challenges facing wider VR consumer adoption. There simply aren’t enough high-quality VR travel applications, and the ones that are out there aren’t very well known.

Of course, getting to the higher-resolution visual experience that people are demanding is another big challenge. While virtual travel is certainly exciting, the fact that it’s supposed to be taking people to “real” places means that expectations for visual quality are going to be very high—much higher than for a simulated or animated world. Support for higher-resolution screens and dedicated VR/AR chipsets—such as the widely rumored Qualcomm Snapdragon XR1 that many sites have speculated will be released at this week’s Augmented World Expo show—are certainly going to be steps in the right direction, but it’s clear that we are still in the early days of VR and AR technology.

Virtual travel will never replace the real thing, but it is a great cost-effective alternative and for many—including the elderly and the less mobile—it may be the only way they can experience new places. Plus, for applications that offer the possibility to explore new worlds—from the microscopic to the extraplanetary—it’s the only solution for all of us.

While it’s easy to get caught up in the excitement around gaming in VR, given that the total worldwide travel industry is over 100x the revenues of the worldwide gaming industry (roughly $8 trillion vs. $80 billion per year), the opportunity for bringing high-quality travel experiences to VR is too big to ignore.

Podcast: Facebook, GDPR, Alexa Recording, Samsung Bixby, Intel AI

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Mark Zuckerberg’s European testimony, analyzing the impact of the GDPR data privacy rules, debating the meaning of the Alexa private recording issue, chatting about Samsung’s Bixby, and talking about Intel’s AI Developer Conference.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Despite PC Market Consolidation, Buyers Still Have Plenty of Options

The traditional PC market’s long-term decline gets plenty of press, but one of the other, less-talked-about trends occurring inside the market’s slide is the massive share consolidation among a handful of players. A typical side effect of this type of market transformation is fewer options for buyers, as larger brands swallow up smaller ones or force them out of business. While this has certainly occurred, we’ve seen another interesting phenomenon appear to help offset this: new players entering this mature market.

Top-Five Market Consolidation
The traditional PC market consists of desktops, notebooks, and workstations. Back in 2000, the market shipped 139 million units worldwide, and the top five vendors constituted less than 40% of the total market. That top five included Compaq, Dell, HP, IBM, and NEC. Fast forward to 2010, near the height of the PC market, and shipments had grown to about 358 million units worldwide for the year. The top five vendors were HP, Dell, Acer, Lenovo, and Toshiba, and they represented 57% of the market. Skip ahead to 2017, and the worldwide market had declined to about 260 million units. The top five now represent about 74% of the total market and are made up of HP, Lenovo, Dell, Apple, and Acer.

Market consolidation in mature markets such as Japan, Western Europe, Canada, and the United States has been even more pronounced. In 2017 the top five vendors in Japan represented 77% of shipments; in Western Europe, it was 79%; in Canada, it was 83%; and in the U.S. it was 89%. Markets traditionally considered emerging, however, weren’t far behind. In the Asia Pacific (excluding Japan), the top five captured 69% of the market in 2017; in Latin America, it was 71%; and in Central and Eastern Europe plus the Middle East and Africa it was 76%.

Category Concentration
If we drill down into the individual device categories at the worldwide level, we can see that desktops remain the area of the market with the least share concentration among the top five in a given year. In 2000 the top five represented 38% of the market; in 2010 it was 46%, and in 2017 it was 61%. Desktops continue to be where smaller players, including regional system integrators and value-added retailers, can often still compete with the larger players. In notebooks, the consolidation has been much more pronounced. In 2000 the top five represented 57% of the market; in 2010 it was 67%, and in 2017 it was 82%. Interestingly, in the workstation market, which grew from 900,000 units in 2000 to 4.4 million in 2017, the top five have always been dominant, with a greater than 99% market share in each period.

Another trend visible inside each category is the evolution of average selling prices. At a worldwide level, the average selling price of a notebook in 2000 was $2,176; in 2010 it declined to $739, and by 2017 it had increased to $755. During those same periods, the desktop went from $1,074 to $532, to $556. Workstations were the only category whose ASP continued to decline, dropping from $3,862 to $2,054 to $1,879. I’d argue consolidation itself has played a relatively minor role in the ASP increases in notebooks and desktops, as competition in the market remains fierce. The larger reason for these increases is that both companies and consumers now know they will hold on to their PCs longer than they have in the past, and as a result, they’re buying up to get better quality and higher specifications.

New Entrants in the Market
All of this market consolidation might lead you to believe that today’s PC buyers have fewer choices than they did in the past. And this is true, to some extent. A consumer browsing the aisles at their local big box store or an IT buyer scanning their online options will undoubtedly notice that many of the vendors they purchased from in the past are no longer available. But the interesting thing is that a handful of new players have moved in, and many are putting out very good products.

There’s Google’s own Pixelbook, which demonstrates just how good the Chromebook platform can be. Microsoft continues to grow its product line, now offering notebooks and desktops in addition to its Surface detachable line, showcasing the best of Windows 10. And there are the mobile phone vendors such as Xiaomi and Huawei, each offering notebook products, with the latter in particular fielding a very good high-end product. It’s also notable that none of these vendors has targeted the high-volume, low-margin area of the market. All are shipping primarily mid to high-end products in relatively low volumes.

As a result, none of these newer entrants have come close to cracking the top five in terms of shipments in the traditional PC market. But I’d argue that their presence has helped keep existing vendors motivated and has increased competition. As a result, I’d also say that the top five vendors are producing some of their best hardware in years.

As the traditional PC market decline slows and eventually stabilizes (the first quarter of 2018 was flat year over year), competition will intensify, and consolidation is likely to continue. It will be interesting to see how these newer vendors compete to grow their share, and how the old guard will fight to gobble up even more, utilizing their massive economies of scale. Regardless, the result will be a boon for buyers, especially those in the growing premium end of the market, who should continue to have plenty of good hardware options from which to choose.

Dell Cinema Proves PC Innovation Still Vital

The future of the consumer PC needs to revolve around more than just hardware specifications. What Intel or AMD processor powers the system, and whether an NVIDIA or Radeon graphics chip is included, are important details to know, but OEMs like Dell, HP, Lenovo, and Acer need to focus more on the end-point consumer experiences that define the products they ultimately hand to customers. Discrete features and capabilities are good check-box items, but in a world of homogeneous design and platform selection, finding a way to differentiate your solution in a worthwhile manner is paramount.

This is a trend I have witnessed at nearly every stage of this product vertical. In the DIY PC space, motherboard companies faltered when key performance features (like the memory controller) were moved out of their control and onto the processor die itself. This meant that the differences between motherboards were nearly zero in terms of what a user actually felt, and even in what raw benchmarks would show. The market has evolved to a features-first mentality, focusing on visual aesthetics and add-ons like RGB lighting as much as, or more than, performance and overclocking.

For the PC space, the likes of Dell, HP, and Lenovo have been fighting this battle for a number of years. There are only so many Intel processors to pick from (though things are a bit more interesting with AMD back in the running) and only so many storage solutions, etc. When all of your competition has access to the same parts list, you must innovate in ways outside what many consider to be in the wheelhouse of PCs.

Dell has picked a path for its consumer product line that attempts to focus consumers on sets of technology that can improve their video watching experience. Called “Dell Cinema”, this initiative is a company-wide direction, crossing different product lines and segments. Dell builds nearly every type of PC you can imagine, many of which can and do benefit from the tech and marketing push of Dell Cinema, including notebooks, desktop PCs, all-in-ones, and even displays.

Dell Cinema is flexible as well, meaning it can be integrated with future product releases or even expanded into the commercial space, if warranted.

The idea behind this campaign, both its marketing and its technology innovation, is to build better visuals, audio, and streaming hardware that benefits consumers as they watch TV, movies, or other streaming content. Considering the size of the streaming market and the amount of time spent streaming media on PCs and mobile devices, Dell’s aggressive push and emphasis here is well placed.

Dell Cinema consists of three categories: CinemaColor, CinemaSound, and CinemaStream. None of these have specific definitions today. With the range of hardware the technology is being implemented on, from displays to desktops to high-end XPS notebooks, there will be a variety of implementations and quality levels.

CinemaColor focuses on the display, for both notebooks and discrete monitors. Here, Dell wants to enhance color quality and provide screens with better contrast ratios, brighter whites, and darker blacks. Though Dell won't be able to build HDR-level displays into every product, the goal is to create screens that are "optimized for HDR content." Some of the key tenets of the CinemaColor design are 4K screen support, thinner bezels (like the company's InfinityEdge design), and support for Windows HD Color options.

For audio lovers, Dell CinemaSound improves audio on embedded speakers (notebooks, displays) as well as on connected speakers and headphones through digital and analog outputs. Through a combination of hardware selection and in-house software, Dell says it can provide users with a more dynamic soundstage with clearer highs, enhanced bass, and higher volume levels without distortion. The audio processing comes from Dell's MaxxAudio Pro software suite, which allows for equalization control targeting entertainment experiences like movies and TV.

The most technically interesting of the three might be CinemaStream. Using hardware selected with streaming in mind, along with a software and driver suite tuned for it, this technology optimizes the PC's resources to deliver a consistently smooth streaming video experience. Intelligent software identifies when you are streaming video content and then prioritizes that traffic on the network stack as well as in hardware utilization. This equates to less buffering and stuttering and potentially higher-resolution playback, with more bandwidth available to the streaming application.
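Dell has not published how CinemaStream actually works under the hood, so purely as an illustration of the general idea described above (spot flows that look like streaming video, then raise their priority so the network stack services them first), here is a minimal, hypothetical Python sketch. Every name, host hint, and threshold in it is my own assumption, not Dell's implementation.

# Hypothetical sketch of stream-aware traffic prioritization.
# None of these names or thresholds come from Dell; they only
# illustrate the classify-then-prioritize idea described above.
from dataclasses import dataclass

STREAMING_HOST_HINTS = ("netflix", "youtube", "hulu", "twitch")

@dataclass
class Flow:
    host: str          # server hostname for the connection
    port: int          # destination port
    kbps: float        # observed average throughput
    priority: int = 0  # 0 = best effort, higher = serviced first

def looks_like_streaming(flow):
    """Crude heuristic: HTTPS traffic to a known video host with sustained throughput."""
    return (
        flow.port == 443
        and any(hint in flow.host for hint in STREAMING_HOST_HINTS)
        and flow.kbps > 1000  # sustained > ~1 Mbps suggests video
    )

def prioritize(flows):
    """Tag streaming flows so the rest of the stack can service them first."""
    for flow in flows:
        flow.priority = 5 if looks_like_streaming(flow) else 0

if __name__ == "__main__":
    flows = [Flow("rr1.netflix.com", 443, 4500.0), Flow("example.com", 443, 80.0)]
    prioritize(flows)
    for f in flows:
        print(f.host, "priority", f.priority)

In a real product the classification would be far more sophisticated and the prioritization would be enforced by the operating system and network drivers, but the basic detect-and-prioritize loop is the part Dell is describing.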

These three features combine to form Dell Cinema. Though other OEMs and even component vendors have programs in place to tout capabilities like this for displays or sound, only Dell has succeeded in creating a singular, easily communicated brand around them.

Though it was seemingly impossible in this first rollout, I would like to see Dell better communicate the details of the technology behind Dell Cinema. Or better yet, draw specific lines in the sand: CinemaColor means any display (notebook or monitor) will meet a certain color depth rating or above, or a certain HDR capability or above, and so on. The same goes for audio and streaming; Dell should look to make these initiatives more concrete. That would give more confidence to consumers, who would no longer have to parse nebulous language.

While great in both theory and in first-wave implementations, something like Dell Cinema can’t be a one-trick-pony for the company. Dell needs to show iteration and plans for future improvement for it, not only for Dell Cinema to be taken seriously, but for any other similar program that might develop in the future. Trust is a critical component of marketing and technology initiatives like this one, and consumers are quick to shun any product or company that abandons them.

Esports must do Right by Female Athletes

Earlier this year, I was sitting at a Dell press conference at CES when Frank Azor, one of the co-founders of Alienware and now responsible for the gaming and XPS business at Dell, announced a collaboration with Team Liquid to build the first two Alienware training facilities for esports. I was not ashamed to admit on Twitter that I had no idea gaming had grown up so much as to become comparable to an Olympic sport.

I had witnessed the rise of game streaming through my own kid, who spends as much time watching people play Minecraft as she does playing. But I had no idea how many gamers around the world train that way and earn a living from it. I am pretty sure she does not know either!

While esports has been around for decades, it has taken on a global role over the past ten years, and over the past couple it has started to look more and more like traditional sports, with significant investments and broadcasting interest from channels such as ESPN. John Lasker, ESPN's VP of programming, compared adding esports to adding poker: something nobody questions today, but which was not covered without some initial skepticism.

There is still a lot of work to be done to change the mass-market perception of what esports athletes look like, and why one should think of them as sportspeople with unique capabilities rather than kids sitting in front of a PC in a bedroom eating lousy food and drinking soda. Esports is becoming so big that it will be an official medal sport at the 2022 Asian Games in China. The Olympic Council of Asia (OCA) announced a partnership with Alisports, the sports arm of Chinese online retail giant Alibaba, to introduce esports as a demonstration sport at next year's games in Indonesia, with full-fledged inclusion in the official sporting program at the Hangzhou Games in 2022. The OCA said the decision reflects "the rapid development and popularity of this new form of sports participation among the youth."

The eSports Audience and Athletes

According to a recent GWI report, one in three esports fans is aged between 20 and 25. Overall, esports fans now represent around 15% of internet users, and 71% of them are male. This mirrors the pro-gamer crowd quite well. Yet gaming overall has not been so male-dominated for quite some time. Already in 2012, a study by the Entertainment Software Association showed that gamers were split 53% male and 47% female.

So why are we not seeing more female pro-gamers? The answer is pretty straightforward: culture and gatekeeping. Female gamers are often made to feel that they do not belong and that they do not have the skills. I am sure some of you will quickly run through a list of heroines like Cortana, Lara Croft, and Sonya Blade, but as you continue, and you pay attention to their outfits, you quickly spot one of the problems: objectification.

As pro-gaming added streaming as a big part of the experience, as well as a revenue opportunity, being a pro gamer got even harder for women, as they are often harassed. To stay in the game, many female players "hide" themselves by avoiding voice chat and so cut themselves out of the streaming revenue opportunity, which in turn limits their exposure, their ability to grow their followers, and their chances to showcase their skills. It is a vicious circle that is hard to break.

A Big Business for Some

Newzoo estimates global esports revenues will reach $905.6 million in 2018, an increase of more than $250 million compared to 2017. North America will generate the most revenue, contributing 38% of the global total. Sponsorship is the highest-grossing esports revenue stream worldwide, contributing $359.4 million in 2018 compared to $234.6 million in 2017.

This fantastic growth does not seem to benefit everyone, however. Esports, like many traditional sports, has a pay gap problem. Earlier in the year, the winners of China's Lady Star League took home roughly $22,000, while this year's LoL spring split champions took home $100,000, second place $50,000, third place $30,000, and fourth place $20,000.

In traditional sports, we have some great examples like Wimbledon, where the organizers have offered equal prize money for over ten years now. The Australian Open followed suit. Looking at tennis as a whole, however, the gap persists, and the same is true for soccer, golf, and cricket. For some, the issue is deeply rooted in the rules of the game, which favor men. Think about advertising, which is where much of the money comes from. Female esports chases the same sponsors and TV channels as male sports, but because of the male-biased demographic on those channels, it does not reach viewing figures similar to those of male sports. More recently, sponsors have started to realize that they can reach a pretty good demographic of women for a relatively low price, and that those women are more often than not the decision makers on many household purchases.

This week, Epic Games announced it would inject $100 million into Fortnite esports competitions for the 2018-2019 season. The money, according to a company blog, will fund prize pools for Fortnite competitors in a more inclusive way that focuses on the joy of playing and watching the game, and will not be limited to the top players only.

While no details have been shared, I am really hoping that we will see some of the money go to support female-only tournaments with prizes that match what we see in the male tournaments. Female-only tournaments will not only give access to money but, equally important, provide a safe space for female athletes to compete without feeling isolated. By all means, I do not expect Epic Games to be a silver bullet for all that is wrong with the lack of female empowerment in esports, but wouldn't it be good if the joy of playing they talked about involved some effort to make female athletes more welcome?

I saw some people pointing out that girls should be encouraged to game, that they should be included. In a way, they are making it sound like esports has a pipeline issue the way tech does. Yet, when I look at my daughter's 4th-grade class, I see boys and girls gaming together and boys acknowledging the skills of their top player, who happens to be a girl. The same can be said about soccer or basketball. So it seems to me that for once we do not have a pipeline issue, not until the kids grow up and are told they cannot play together!

The World of AI Is Still Taking Baby Steps

Given all the press surrounding it, it’s easy to be confused. After all, if you believe everything you read, you’d think we’re practically in an artificial intelligence (AI)-controlled world already, and it’s only a matter of time before the machines take over.

Except, well, a quick reality check will easily show that that perspective is far from the truth. To be sure, AI has had a profound impact on many different aspects of our lives—from smart personal assistants to semi-autonomous cars to chatbot-based customer service agents and much more—but the overall real-world influence of AI is still very modest.

Part of the confusion stems from a misunderstanding of AI. Thanks to a number of popular, influential science fiction movies, many people associate AI with a smart, broad-based intelligence that can enable something like the nasty, people-hating world of Skynet from the Terminator movies. In reality, however, most AI applications of today and the near future are very practical—and, therefore, much less exciting.

Leveraging AI-based computer vision on a drone to notice a crack on an oil pipeline, for example, is a great real-world AI application, but it’s hardly the stuff of AI-inspired nightmares. Similarly, there are many other examples of very practical applications that can leverage the pattern recognition-based capabilities of AI, but do so in a real-world way that not only isn’t scary, but frankly, isn’t that far advanced beyond other types of analytics-based applications.

Even the impressive Google Duplex demos from the company's recent I/O event may not be quite as awe-inspiring as they first appeared. Among many other issues, it turns out Duplex was specifically trained just to make haircut appointments and dinner reservations—not to book doctor's appointments, coordinate a night out with friends, or handle any of the multitude of other real-world scenarios that the demo's voice assistant-driven phone calls implied were possible.

Most AI-based activities are still extraordinarily literal. So, if there’s an AI-based app that can recognize dogs in photos, for example, that’s all it can do. It can’t recognize other animal species, let alone distinct varieties, or serve as a general object detection and identification service. While it’s easy to presume that applications that can identify specific dog species offer similar intelligence across other objects, it’s simply not the case. We’re not dealing with a general intelligence when it comes to AI, but a very specific intelligence that’s highly dependent on the data that it’s been fed.
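To make that literalness concrete, here is a deliberately toy Python sketch, not any real product's model, of a "classifier" whose entire world is three dog breeds. Because its label space is fixed by the data it was built on, it will confidently return a breed for anything you feed it, a cat included.

# Toy illustration: a "classifier" that only knows three dog breeds.
# Whatever you feed it, it can only ever answer with one of those
# labels -- it has no concept of "not a dog."
LABELS = ["beagle", "husky", "poodle"]

# Pretend these are learned prototypes over two features: (size, fluffiness).
PROTOTYPES = {
    "beagle": (0.3, 0.4),
    "husky": (0.7, 0.9),
    "poodle": (0.4, 0.8),
}

def classify(size, fluffiness):
    """Return the closest dog-breed prototype, no matter what the input is."""
    def distance(label):
        ps, pf = PROTOTYPES[label]
        return (size - ps) ** 2 + (fluffiness - pf) ** 2
    return min(LABELS, key=distance)

print(classify(0.7, 0.9))    # a husky -- plausibly correct
print(classify(0.2, 0.95))   # a fluffy cat? it still answers "poodle"

A real image model is vastly more capable than this, but the structural limitation is the same: the output space is whatever the training data defined, and nothing more.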

I point this out not to denigrate the incredible capabilities that AI has already delivered across a wide variety of applications, but simply to clarify that we can’t think about artificial intelligence in the same way that we do about human-type intelligence. AI-based advances are amazing, but they needn’t be feared as a near-term harbinger of crazy, terrible, scary things to come. While I’m certainly not going to deny the potential to create some very nasty outcomes from AI-based applications in a decade or two, in the near and medium-term future, they’re not only not likely, they’re not even technically possible.

Instead, what we should concentrate on in the near-term is the opportunity to apply the very focused capabilities of AI onto important (but not necessarily groundbreaking) real-world challenges. This means things like improving the efficiency or reducing the fault rate on manufacturing lines or providing more intelligent answers to our smart speaker queries. There are also more important potential outcomes, such as more accurately recognizing cancer in X-rays and CAT scans, or helping to provide an unbiased decision about whether or not to extend a loan to a potential banking customer.

Along the way, it's also important to think about the tools that can help drive a faster, more efficient AI experience. For many organizations, that means a growing concentration on new types of compute architectures, such as GPUs, FPGAs, DSPs, and AI-specific chip implementations, all of which have been shown to offer advantages over traditional CPUs in certain types of AI training and inferencing-focused applications. At the same time, it's critically important to look at tools that can offer easier, more intelligible access to these new environments, whether that be Nvidia's CUDA platform for GPUs, National Instruments' LabVIEW tool for programming FPGAs, or other similar tools.

Ultimately, we will see AI-based applications deliver an incredible amount of new capability, the most important of which, in the near-term, will be to make smart devices actually “real-world” smart. Way too many people are frustrated by the lack of “intelligence” on many of their digital devices, and I expect to see many of the first key advances in AI to be focused on these basic applications. Eventually, we’ll see a wide range of very advanced capabilities as well, but in the short term, it’s important to remember that the phrase artificial intelligence actually implies much less than it first appears.

Why the Maker Movement is Critical to Our Future

This last weekend the granddaddy of Maker Faires was held at the San Mateo Event Center, and close to 100,000 people went to the Faire to check out all types of maker projects. When Dale Dougherty, the founder of Make Magazine, started his publication, the focus was really on STEM and tech-based ideas.

In the early days of the magazine, you would find all types of projects for making your own PCs, robots, 3D-printed designs, and so on. It reminded me a bit of my childhood, when we had erector sets, Tinkertoys, and Lincoln Logs that were educational and, in their own way, tried to get kids interested in making things in hopes it would help guide them to future careers.

Over time, the Maker Movement has evolved well beyond just STEM projects and now includes just about any do-it-yourself project you could think of. At the Faire this year I saw quilting demos, a bee-keeping booth, and an area teaching you how to ferment foods, alongside stalls with laser cutters, 3D printers, wood lathes, robotic kits, and a lot of other STEAM-based items and ideas.

Going to a Maker Faire is fun and fascinating in many ways, but the thing I love most is watching the excited faces of the boys and girls who attend. Seeing them go from booth to booth gathering ideas for their own maker projects is rewarding in itself.

The Maker Movement comes at one of the most critical times in our history. When I was in junior high and high school in the 1960s, the world we were being prepared for had little to do with tech. My elective options were auto shop, drafting, and metal shop, and I even took a home economics class. These courses were designed to prepare us for blue-collar jobs. Of course, those types of jobs still exist today, but in the information age the majority of jobs, now and in the future, will be focused more and more on skills related to math, engineering, and science.

The Maker Movement, and especially the Maker Faires that are now all over the world, serve as an essential catalyst to help kids get interested in STEM and STEAM. They are designed to instill in them a real interest in pursuing careers in technology and the sciences, as well as to introduce them to the idea that anyone can be a "maker."

At this year's event in the Bay Area, the Maker Faire held a college and career day on Friday morning before the Faire itself opened that afternoon. I had the privilege of moderating a panel about career journeys, with five panelists telling their stories about how they got to where they are in their careers today.

This was the Maker Faire's first college and career day, and it was very successful. The various speakers Mr. Dougherty brought in to talk to hundreds of students told all types of stories about what got them into STEM-related jobs and shared valuable career advice with those who attended this special day.

Of the many speakers at the career day event, two stood out to me. The first was Sarah Boisvert, the founder of the Fab Lab Hub. Ms. Boisvert recounted that when President Trump asked IBM CEO Ginni Rometty about her thoughts on the future of jobs in America, Rometty told him that "we do not need more coal workers, what we need are 'New Collar Workers,'" referring to the current and future demand for a technically skilled labor force to meet the needs of America's job market. Ms. Boisvert has written a book entitled "The New Collar Workforce: An Insider's Guide to Making Impactful Changes to Manufacturing and Training."

An overview of the book states:

The "new collar" workers that manufacturers seek have the digital skills needed to "run automation and software, design in CAD, program sensors, maintain robots, repair 3D printers, and collect and analyze data," according to the author. Educational systems must evolve to supply Industry 4.0 with new collar workers, and this book leads the reader to innovative programs that are recreating training programs for a new age in manufacturing.
The author's call to action is clear: "We live in a time of extraordinary opportunity to look to the future and fundamentally change manufacturing jobs but also to show people the value in new collar jobs and to create nontraditional pathways to engaging, fulfilling careers in the digital factory. If the industry is to invigorate and revitalize manufacturing, it must start with the new collar workers who essentially make digital fabrication for Industry 4.0 possible."
This book is for anyone who hires, trains, or manages a manufacturing workforce; educates or parents students who are searching for a career path; or is exploring a career change.

Ms. Boisvert told the students in the audience that when she hires people, the first thing she looks for is if they have solid problem-solving skills. She sees that as being a fundamental part of “New Collar” jobs.

The other speaker who stood out to me was on my panel. Janelle Wellons is a young African American woman who initially wanted to be a theoretical mathematician. Here is her bio:

Janelle Wellons graduated from the Massachusetts Institute of Technology with a B.S. in Aerospace Engineering in 2016. After graduating, she moved from her home in New Jersey to Southern California to work at the NASA Jet Propulsion Laboratory (JPL) in Pasadena. At JPL, Janelle works as an instrument operations engineer on the Lunar Reconnaissance Orbiter, the Earth-observing Multi-Angle Imager for Aerosols, and previously on the Saturnian Cassini mission. Her job consists of creating the commands for and monitoring the health and safety of a variety of instruments ranging from visible and infrared cameras to a radiometer. She also serves on an advisory board for Magnitude.io, a nonprofit that creates project-based learning experiences designed for STEM education. When she isn’t working, you can find her playing video games, reading, enjoying the outdoors, and working on cool projects out in the Mojave.

As a young African American woman, she is an inspiration to kids of all ages and ethnic backgrounds, and she reminded me of Katherine Johnson, the woman featured in Hidden Figures, who also worked for NASA and was instrumental in John Glenn's Earth orbit in 1962.

As she spoke, I was watching the kids in the audience, and they were spellbound listening to her tell them that anyone can achieve their goals if they put their minds to it.

The Maker Movement and Maker Faires are critical to our future. Our world is changing rapidly. Job skills of the past need to be updated to meet the changing needs of a world driven by information and analytics, and of manufacturing jobs that will require new skills. If you get a chance to go to a Maker Faire in your area, I highly recommend you check it out. You won't be disappointed and, like me, will learn a lot and perhaps be inspired to become a maker yourself.

Podcast: HP PCs, WiFi Mesh Standard, Blockchain And Cryptocurrency, Autonomous Cars

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell discussing HP’s new PCs, the WiFi Mesh standard, blockchain and cryptocurrencies, and the outlook for autonomous cars.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

A Lot Needs to Happen Before Self-Driving Cars Are A Reality

Self-driving vehicles represent one of the most fascinating fields of technology development today. More than just a convenience, they have the potential to radically alter how we work and live, and to alter the essential layout of cities. Their development involves a coalition of many disciplines and a marriage of the auto industry's epicenters with Silicon Valley, and it is attracting hundreds of billions of dollars in annual investment globally. Exciting as all this is, I think the viability of a full-fledged self-driving car in the day-to-day real world is further off than many believe.

My 'dose of reality' is driven by two main arguments. First, I think we've underestimated the technology involved in making true autonomous driving safe and reliable, especially in cities. Second, there are significant infrastructure investments required to make our streets 'self-driving ready.' This requires significant public-sector attention and investment which, except in a select few places, is not happening yet.

Self-driving cars have already logged lots of miles and have amassed an impressive safety record, with a couple of notable and unfortunate exceptions. But most of this has occurred on test tracks, along the wide-open highways of the desert southwest, and in Truman Show-esque residential neighborhoods.

For a sense of the real world that the self-driving car has to conquer, I encourage you to visit my home town of Boston. Like most of the world, it's not Phoenix or Singapore. It has narrow streets, a non-grid-like, unintuitive layout, and a climate where about half of the year's days feature wet, snowy, or icy roads. Sightlines are bad, lane lines are faded, and pedestrians and cyclists compete for a limited amount of real estate. In other words, a fairly typical older city. How would a self-driving car do here?

To get even more micro, I'll take you to a very specific type of intersection that has all the characteristics designed to trip up a self-driving vehicle. In this situation, the car has to make a left turn from a faded turn lane with no traffic light, and then cross a trolley track, two lanes of oncoming traffic, and a busy crosswalk. So we have poor lane markings, terrible sightlines, and pedestrians moving at an uneven pace and doing unpredictable things, before even getting into the wild cards of weather, glare, and so on. My heart rate quickens every single time I have to make this turn. I would want to see a car successfully perform this left turn a Gladwellian 10,000 times before I'd sign that waiver.

I’m sure each of you can provide countless examples of situations that would prompt the question of “can a self-driving car really handle that”? It shows the complexity and the sheer number of variables involved in pulling this off. Think of all the minor decisions and adjustments you make when driving, particularly in a congested area. Rapid advancements in AI will help.

This is not to diminish the mammoth progress that has been made on the road to the self-driving vehicle. The technology is getting there for self-driving cars to be viable in many situations and contexts, within the next five years. It’s that last 10-20% of spots and situations that will prove particularly vexing.

If we believe the self-driving car could be a game-changer over the next 20 years, I think we need to be doing a lot more thinking about the infrastructure required to support its development. We all get excited about the potential benefits self-driving/autonomous vehicles will usher in, such as changes to the entire model of car ownership, less congested roads, the disappearance of parking lots, and so on. This exciting vision assumes a world where the self-driving car is already mainstream. But I think it's a bit naïve with regard to the type of investment needed to make this happen. This is going to require huge public-sector involvement and dollars across many fields and categories. As examples: improvements to roads to accommodate self-driving cars (lane markings, etc.); deployment of sensors and all sorts of 'smart city' infrastructure; a better 'visual' infrastructure; a new regulatory apparatus; and so on. And of course, we will need advanced mobile broadband networks, a combination of 5G with the vehicle-centric capabilities envisioned by evolving standards such as V2X, to help make this happen.

This will be a really exciting field, with all sorts of job opportunities. There’s the possibility of a game-changing evolution of our physical infrastructure not seen in 100+ years. But worldwide transportation budgets are still mainly business-as-usual, with sporadic hot pockets of cities hoping to be at the bleeding edge.

Getting to this next phase of the self-driving car will require a combination of pragmatism, technology development, meaningful infrastructure investment, and a unique model of public-private cooperation.

 

Will the Gig Economy help Moms to have it All?

This past Sunday was Mother's Day in the US and in many countries across Europe, including my home country of Italy. As I was waking up in a hotel room miles away from my family, I felt a whole bunch of emotions: sad that I was not home, blessed that I have a husband who supports me in my career, and extremely lucky to be in a job I love.

Thanks to the jet lag, I had plenty of time to think about my fellow moms and how much things have changed since I was growing up and my mom was a working mom. At the same time, some of the stigmas of being a working mom are still there. Whether you are working, as my mom did, to contribute to the family income, or because you want a career, some people still see you as not putting your children first. And if you are taking a break to be with your kids in their foundational years, you are dealing with the judgment of not putting yourself first. I thought of my circle of fellow moms and made a mental list of how many successful businesswomen I know, how many are the primary breadwinner in the family, and how many, now that the kids are grown up, would like to get back to work. It is a good, healthy mix of women who, no matter where they sit in the group, support one another.

The “Motherhood Penalty”

Whether you are a working mom, or a mom who took time off to be with her kids as they grew up, I am sure you have stories about management taking for granted that you would not be giving one hundred percent after you gave birth, or assuming that if you were leaving your career you had never been committed to it in the first place. If you have been lucky enough to have a supportive work environment, it might come as a surprise to hear about the "motherhood penalty."

Data shows that being a woman is only part of the pay gap we currently see across so many segments. The Institute for Fiscal Studies has found that before having a child, the average female worker earns 10% to 15% less per hour than a male worker; after childbirth, that gap increases steadily to 33% after around 12 years. This has financial and economic implications, but also emotional ones. The "motherhood penalty" helps explain why women overall make 81 cents for every dollar a man earns. Conversely, research has shown that having children raises wages for men, even when correcting for the number of hours they work.

What is the Gig Economy?

Simply put, the gig economy is the one developing outside traditional work models. Services enabled by the app economy have opened up opportunities for people to earn a living in a much more flexible work environment. While in Silicon Valley many participating in the gig economy do so out of necessity, to afford the high cost of living, which has led to heavy criticism and accusations of exploitation, the concept is indeed one that opens up opportunity.

According to a recent study by the McKinsey Global Institute, up to 162 million people in the United States and Europe are involved in some form of independent work. Members of the gig economy, from ride shares to food deliveries to dog walking and child care services, are not employees of the company that pays them; rather, they are independent contractors. Instead of working 9-to-5 for a single employer, they are leveraging their advantages to maximize their earning opportunity while balancing it with their personal needs.

While, of course, many jobs in the gig economy do not include traditional benefits, they might be the best fit for moms returning to work.

Be Your Own Boss

Mothers returning to work are chronically underpaid and undervalued for their experience and ability. PwC's November 2016 report on women returning to work found that nearly 65% of returning professional women work below their potential salary level or level of seniority.

According to new research, that gap hasn’t narrowed at all since the 1980s. And for some women, it’s even increased. The study found that when correcting for education, occupation and work experience, the pay gap for mothers with one child rose from 9% in the period between 1986 and 1995 to 15% between 2006 and 2014. For mothers with two kids, the gap remained steady at 13% and stayed at 20% for mothers with three or more kids. The researchers point to a lack of progress on family-friendly policies in the United States, such as paid parental leave and subsidized childcare. Other countries, including Sweden, have narrowed their gender pay gaps after instituting such laws.

Considering how little regulation and companies' attitudes toward child care and parental leave have progressed, and accounting for the changes the workplace is undergoing to appeal to younger millennials, getting back in the game must be daunting for those moms who took a break from their careers. The gig economy might offer them the best opportunity, not just in terms of flexibility but also in terms of rediscovering what they want to do and earning the best money.

From marketing to payment methods to service delivery, technology advancements can make being your own boss much easier than it ever was. This option, of course, does not mean big companies are off the hook when it comes to improving the level of support moms get at work or closing the pay gap. All it means is that women returning to work after having kids no longer have to settle for a job that is not adequately paid or does not help them fulfill their full potential.

Device Independence Becoming Real

For decades, compute devices and the tech industry as a whole were built on a few critical assumptions. Notably, that operating systems, platforms, applications, and even file formats were critical differentiators, which allowed companies to build products that offered unique value. Hardware products, software, and even services were all built in recognition of these differences and, in some instances, to bridge or overcome them.

Fast forward to today, and those distinctions are becoming increasingly meaningless. In fact, after hearing the forward-looking strategies of key players like Microsoft, Google, and Citrix at their respective developer and customer events of the past week, it’s clear the world of true device and platform independence is finally becoming real.

Sure, we've had hints at some of these developments before. After all, weren't browser-based computing and HTML5 supposed to rid the world of proprietary OSes, applications, and file types? All you needed was a browser running on virtually any device, and you were going to be able to run essentially any application you wanted, open any file you needed, and achieve whatever information-based goal you could imagine.

In reality, of course, that utopian vision didn’t work out. For one, certain types of applications just don’t work well in a browser, particularly because of limitations in user interface and interaction models. Plus, it turned out to be a lot harder to migrate existing applications into that new environment, forcing companies to either rebuild from scratch or abandon their efforts. The browser/HTML5 world was also significantly more dependent on network throughput and centralized computing horsepower than most realized. Yes, our networks were getting faster, and cloud-based data centers were getting more powerful, but they still couldn’t compare to loading data from a local storage device into onboard CPUs.

Since then, however, there have been a number of important developments not just in core technologies, but also in business models, software creation methodologies, application delivery mechanisms, and other elements that have shifted the computing landscape in a number of essential ways. Key among them is the rise of services that leverage a combination of both on-device and cloud-based computing resources to deliver something that individuals find worthy of value. Coincident with this is the growing acceptance to pay for software, services, and other information on an ongoing basis, as opposed to a single one-and-done purchase, as was typically the case with software in the past.

Admittedly, many of these services do still require an OS-dependent application at the moment, but with the reduction of meaningful choices down to a few, it’s much easier to create the tools necessary to make the services available to an extremely wide audience. Plus, ironically, we are finally starting to see some of the nirvana promised by the original HTML5 revolution. (As with many things in tech—timing is everything….) Thanks to new cloud-based application models, the use of containers to break applications into reasonably-sized parts, the growth in DevOps application development methodologies, the rise in API usage for creating and plugging new services into existing applications, and the significantly larger base of programmers accustomed to writing software with these new tools and methods, the promise of truly universal, device-independent services is here.

In addition, though it may not appear that way at first glance, the hardware does still matter—just in different ways than in the past. At a device level, arguably, individual devices are starting to matter less. In fact, in somewhat of a converse to Metcalfe’s Law of Networks, O’Donnell’s Law of Devices says that the value of each individual digital device that you own/use decreases with the number of devices that you own/use. Clearly, the number of devices that we each interact with is going up—in some cases at a dramatic rate—hence the decreased focus on specific devices. Collectively, however, the range of devices owned is even more important, with a wider range of interaction models being offered along with a broader means of access to key services and other types of information and communication. In fact, a corollary of the devices law could be that the value of the device collection is directly related to the range of physical form factors, screen sizes, interaction models, and connectivity options offered to an individual.
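For readers who prefer the math spelled out, one possible way to put the two laws side by side is below. This is my own illustrative formalization, with assumed functional forms; the column states both laws only in words.

% Metcalfe: the value of a network grows with the square of its n users.
% O'Donnell's Law of Devices: the value of any one of your d devices falls as d grows.
% Corollary: the value of the collection tracks its diversity, not just its size.
\[
V_{\text{network}} \propto n^{2}, \qquad
V_{\text{device}}(d) \propto \frac{1}{d}, \qquad
V_{\text{collection}} \propto \mathrm{diversity}(\text{form factors, inputs, connectivity})
\]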

The other key area for hardware is the amount and type of computing resources available outside of personally owned devices. From the increasing power and range of silicon options in public and private data centers powering many of these services, to the increasing variety of compute options available at the network edge, the role of computing power is simply shifting to a more invisible, “ambient” type role. Ironically, as more and more devices are offered with increasing computing power (heck—even ARM-based microcontrollers powering IoT devices now have the horsepower to take on sophisticated workloads), that power is becoming less visible.

So, does this mean companies can’t offer differentiated value anymore? Hardly. The trick is to provide the means to interconnect different pieces of this ambient computing background (as Microsoft CEO Satya Nadella said at Build last week, the world is becoming a computer) or to perform some of the specific services that are still necessary to bridge different aspects of this computing world. This is exactly what each of the companies mentioned at the beginning of this article discussed at their respective events.

Microsoft, for their part, described a world where the intelligent edge was growing in importance and how they were creating the tools, platforms, and services necessary to tie this intelligent edge into existing computing infrastructures. What particularly struck me about Microsoft’s approach is that they essentially want to serve as a digital Switzerland and broker connections across a wide variety of what used to be competitive platforms and services. The message was a very far cry from the Microsoft of old that pushed hard to establish its platform as the one true choice. From enabling connections between their Cortana assistant and Amazon’s Alexa in a compelling, intriguing way, to fully integrating Android phones into the Windows 10 experience, the company was clearly focused on overcoming any kinds of gaps between devices.

At I/O, Google pushed a bit harder on some of the unique offerings and services on its platforms, but as a fundamentally cloud-focused company, it has been touting a device-independent view of the world for some time. Like Microsoft, Google also announced a number of AI-based services available on its Google Cloud that developers can tap into to create "smarter" applications and services.

Last, but certainly not least, Citrix did a great job of laying out the vision and the work it has done to overcome the platform and application divides that have existed in the workplace for decades. Through their new Citrix Workspace app, they presented a real-world implementation of essentially any app running on any device from any location. Though that concept is simple—and clearly fits within the device independence theme of this column—the actual work needed to do it is very difficult. Arguably, the company has been working on delivering this vision for some time, but what was compelling about their latest offering was the elegance of the solution they demonstrated and the details they made sure were well covered.

A world that is less dependent on individual devices and more dependent on a collection of devices is very different than where we have been in the past. It is also, to be fair, not quite a reality just yet. However, it’s become increasingly clear that the limitations and frustrations associated with platform or application lock-in are going away, and we can look forward to a much more inclusive computing world.

The Missing Link in VR and AR

VR and AR are big buzzwords in the world of tech these days. At Tech.pinions, we have been covering these technologies for over five years and have shared solid perspectives on significant AR and VR products when we feel they move the technology forward.

All of our team has tried out or tested most of the AR and VR products on the market today, and at least in my case, I only see their value at the moment in vertical markets. This is especially true for VR. Apple and Google have tried to bring AR to a broader audience, but here too, AR delivered on a smartphone is still a novelty and is most accepted when used in games like Pokemon Go and in some vertical markets.

In multiple columns over the last year, I have shared my excitement for AR, especially after seeing some cool AR applications in the works that should be out by the end of the year. Although they are still delivered via a smartphone, ARKit and ARCore are giving software developers the tools to innovate on iOS and Android, and in that sense I see the possibility of broader interest in AR later this year. I also expect Apple to make AR one of the highlights of its upcoming developer conference in early June.

However, I feel the most effective way to deliver AR will be through some form of mixed reality glasses. While the intelligence to power these AR glasses may still come from the smartphone, the glasses will be an extension of that smartphone's screen and deliver a better way to view AR content than can be provided on a smartphone screen alone.

I see glasses as the next evolution of the man-machine interface and a technology that will be extremely important to billions of people over the next ten years. In my recent Fast Company column, I shared how I believe Apple will tackle the AR opportunity and how it could be the company that defines the AR-based glasses market.

But if you have used any of the VR or mixed reality headsets or glasses so far, you understand that interacting with the current models is difficult when you have to use a joystick or handheld wand to communicate with any of the features or actions in a given VR or AR application. Even more frustrating, these handheld interfaces do not yet deliver pinpoint precision, which often makes it difficult to activate the functions of these AR or VR applications.

I believe there are three high hurdles to clear before AR is valuable to, and accepted by, mass-market users. The first is creating the types of glasses or eyewear that are both fashionable and functional. Today's VR and AR glasses or goggles make anyone who uses them look like a nerd. In our surveys, this type of eyewear is panned by the people we have talked to about what is acceptable to wear for long periods of time.

The second most significant hurdle will be how the wireless technology in these smartphones is designed to communicate with what I call "skinny glasses," where the glasses rely pretty much on the smartphone for their intelligence. Getting the wireless connections right and applying the smartphone's functions and intelligence to these glasses will be difficult, but it is critical if we want the types of AR glasses that people will wear without standing out as some tech dweeb.

But the missing link that gets little attention when we talk about VR and AR is the way we interact with these glasses to get the kinds of functions we want and need to make these headsets valuable. Undoubtedly voice commands will be part of this interface solution, but there are too many occasions where calling out commands will not be acceptable, such as in a meeting, at church, at a concert, or in a class, to name just a few.

Indeed, we will need other ways to activate applications and interact with these glasses, which most likely will include things like gestures, object recognition via sensors, and virtual gloves or hand signals such as those created by Magic Leap to navigate its specialized mixed reality headset.

However, I believe this is an area ripe for innovation. For example, a company called TAP just introduced a Bluetooth device that fits over four fingers and lets you tap out actual words and characters as a way to input data into existing applications such as Word, or eventually into virtual applications on a mixed reality headset.

The folks from TAP came by and gave me a demo of this product, and I found it very interesting. There is a real learning curve involved in understanding how to tap out the proper letters or punctuation marks, but they have great teaching videos as well as a teaching game to help a person master this unique input system. Check out the link I shared above to see how it works. They are already selling thousands to vision-impaired folks and others for whom a virtual keyboard like TAP is needed for a specific app or function.

But after seeing TAP, I realized that creating a powerful way to interact with AR apps on glasses should not be limited to joysticks, virtual gloves, voice commands, or gestures. This missing link needs out-of-the-box thinking like TAP has shown. Hopefully, we will see many other innovations in this space as tech companies eventually deliver mixed reality glasses that are acceptable to all users and drive the next big thing in man-machine interfaces.

Podcast: Microsoft Build, Citrix Synergy, Google I/O

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news and impact of Microsoft’s Build developer conference, the Citrix Synergy customer conference, and Google’s I/O developer conference.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Microsoft Pushes Developers to Embrace the MS Graph

Microsoft talked about a lot of interesting new technologies at this week’s Build developer conference, from artificial intelligence and machine learning to Windows PCs that work better with Android and Apple smartphones, to some smart new workflow features in Windows 10. But one of the underlying themes was the company’s push to get developers to better leverage the Microsoft Graph. This evolving technology shows immense promise and may well be the thing that keeps Microsoft front and center with consumers even as it increasingly focuses on selling commercial solutions.

Understanding the Graph
The Microsoft Graph isn’t new-it originated in 2015 within Office 365 as the Office Graph-but at Build the company did a great job of articulating what it is, and more importantly what it can do. The short version: the Graph is the API for Microsoft 365. More specifically, Microsoft showed a slide that said the Graph represents “connected data and insights that power applications. Seamless identity: Azure Active Directory sign-in across Windows, Office, and your applications. Business Data in the Graph can appear within your and our applications.”

Microsoft geared that language to its developer audience, but for end users, it means this: Whenever you use Microsoft platforms, apps, or services-or third-party apps and services designed to work with the Graph-you’ll get a better, more personalized experience. And that experience will get even better over time as Microsoft collects more data about what you use and how you use it.

The Microsoft Graph may have started with Office, but the company has rolled it out across its large and growing list of properties. Just inside Office 365 there's SharePoint, OneDrive, Outlook, Microsoft Teams, OneNote, Planner, and Excel. Microsoft's Azure is a cloud computing service, and Azure's Graph-enabled Active Directory controls identity and access management within an organization. Plus, there are Windows 10 services, as well as a long list of services under the banner of Enterprise Mobility and Security. And now that the company has rolled the Graph into many of its own products, it is pushing developers to begin utilizing it, too.
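To make "the Graph is the API for Microsoft 365" a bit more concrete, here is a minimal sketch of what calling it looks like from a developer's perspective. It assumes you have already obtained an OAuth 2.0 access token from Azure Active Directory (app registration, scopes, and consent are omitted) and uses the Graph's standard /v1.0/me and /v1.0/me/messages endpoints.

# Minimal sketch of calling the Microsoft Graph REST API from Python.
# Acquiring the Azure AD access token is omitted; the placeholder below
# stands in for whatever token your app obtains.
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token obtained via Azure AD>"  # placeholder, not a real token
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Identity: the same Azure AD sign-in shared across Windows, Office, and your apps.
me = requests.get(f"{GRAPH_BASE}/me", headers=headers).json()
print(me.get("displayName"))

# Business data: recent mail, one of many Microsoft 365 data sets the Graph exposes.
messages = requests.get(
    f"{GRAPH_BASE}/me/messages", headers=headers, params={"$top": 5}
).json()
for msg in messages.get("value", []):
    print(msg.get("subject"))

The same token-plus-endpoint pattern extends to the other services listed above, which is exactly what makes the Graph attractive as a single integration point for third-party developers.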

Working Smarter, and Bridging Our Two Lives
The goal of the Microsoft Graph is to drive a truly unique experience for every user. One that recognizes the devices you use, and when you use them. One that figures out when you are your most productive and serves up the right tools at the right time to help you get things done. One that eventually predicts what you’ll need before you need it. None of it is quite as flashy as asking your digital assistant to have a conversation for you, but it’s the type of real-world advances that technology should be good at doing.

What’s also notable about the Microsoft Graph is that while it focuses almost entirely on work and productivity, these advances should help smooth friction outside of work, too. If we work smarter, perhaps we can work less. Inside this is Microsoft’s nod to the fact that while it still has consumer-focused businesses such as Xbox, most people will interact with its products in a work setting. That said, most of us have seen the lines between our work and non-work life blur, and the Graph should help drive continued and growing relevance for Microsoft as a result.

Don’t Forget Privacy
Of course, for all this to work Microsoft must collect a large amount of data about you. In a climate where people are starting to think long and hard about how much data they are willing to give up to access next-generation apps and services, this could be challenging. Which is why throughout Build Microsoft executives including CEO Satya Nadella made a point of driving home the company’s stance on data and privacy. Nadella called privacy a human right, and in discussing the Microsoft Graph both on stage and behind closed doors, executives Joe Belfiore and Kevin Gallo noted that this information ultimately belongs to the end user and it is up to Microsoft to keep it private and secure.

The privacy angle is one I expect to see Microsoft continue to push as it works to leverage the Graph in its ongoing battles with Google and Facebook. (I expect Apple will hammer home its stance on the topic at the upcoming WWDC, too.) In the meantime, it will be interesting to see if Microsoft’s developers buy into the promise of the Graph, and how long it will take for their subsequent work to come to fruition. By next year at this time, we may be hearing less about the potential of this technology, and more about end users enjoying the real-world benefits.

Google creates some spin with TPU 3.0 announcement

During the opening keynote of Google I/O yesterday, the company announced a new version of its Tensor Processing Unit, TPU 3.0. Though details were incredibly light, CEO Sundar Pichai claimed that TPU 3.0 would have "8x the performance" of the previous generation and that it would require liquid cooling to hit those performance levels. Immediately, much of the technical media incorrectly asserted an 8x architectural jump without thinking through the implications or how Google might have come to those numbers.

For those that might not be up on the development, Google announced the TPU back in 2016 as an ASIC specifically targeting AI acceleration. Expectedly, this drew a lot of attention from all corners of the field, as it marked not only one of the first custom AI accelerator designs but also came from one of the biggest names in computing. The Tensor Processing Unit targets TensorFlow, a library for machine learning and deep neural networks developed by Google. Unlike other AI training hardware, that does limit the TPU's use case to customers of Google Cloud products and to TensorFlow-based applications.

They are proprietary chips and are not available for external purchase. Just a few months ago, the New York Times reported that Google would begin offering access to TPUs through Google Cloud services. But Google has no shortage of internal use cases for AI processing that TPUs can address, from Google Photos to Assistant to Maps.

The liquid cooling setup for TPU 3.0

Looking back at the TPU 3.0 announcement yesterday, there are some interesting caveats about the claims and statements Google made. First, the crowd cheered when it heard this setup was going to require liquid cooling. In reality, this means either that there has been a dramatic reduction in efficiency with the third-generation chip, or that the chips are being packed much more tightly into these servers, without room for traditional cooling.

Efficiency drops could mean that Google is pushing the clock speed up on the silicon, ahead of the optimal efficiency curve to get that extra frequency. This is a common tactic in ASIC designs to stretch out performance of existing manufacturing processes or close the gap with competing hardware solutions.

Liquid cooling in enterprise environments isn't unheard of, but it is less reliable and more costly to integrate.

The extremely exciting performance claims should be tempered somewhat as well. Though the 8x improvement and the claim of 100 PetaFLOPS of performance are impressive, they don't tell us the whole story. Google was quoting numbers for a "pod," the term the company uses for a combination of TPU chips and supporting hardware that consumes considerable physical space.

A single Google TPU 3.0 Pod

TPU 2.0 pods combined 256 chips, but for TPU 3.0 it appears Google is collecting 512 into a single unit. Beyond the physical size increases that go along with that, this means the relative performance of each TPU 3.0 chip versus TPU 2.0 is about 2x. That's a sizeable jump, but not unexpected in the ever-changing world of AI algorithms and custom acceleration. There is likely some combination of clock speed and architectural improvement behind this per-chip gain, though with that liquid cooling requirement I lean more towards clock speed jumps.

Google has not yet shared architectural information about TPU 3.0 or how it has changed from the previous generation. Availability for TPU 3.0 is unknown, but even Cloud TPU (using TPU 2.0) isn't targeted until the end of 2018.

Google's development in AI acceleration is certainly interesting and will continue to push the industry forward in key ways. You can see that exemplified by NVIDIA's integration of Tensor Cores in its Volta GPU architecture last year. But before the market gets up in arms thinking Google is now leading the hardware race, it's important to put yesterday's announcement in the right context.

Microsoft Build & Google I/O: Compare and Contrast

It would make an excellent title for a school essay, don’t you think? As much as this week was a bit of a logistical nightmare due to the two developer conferences overlapping, it was a great opportunity to directly compare the approach these two companies are taking on what has become a core area of their business: AI.

There was a lot covered in both companies’ keynotes, but for me, there were three key areas where similarities and differences were clear.

Assistants as Active Players in Human Interactions

In very different ways, assistants on both stages played an active role in human interactions. These assistants were not just voices I asked questions to, but they were providing proactive assistance in a particular situation.

Microsoft showed a compelling workplace demo where Cortana was able to greet meeting participants by name as they entered the room or joined the call. With visual support (another common theme) she was able to help the conversation by taking notes for a hearing-impaired participant, scheduling a to-do list, and setting up reminders.

Google showed a more conversational assistant that now can perform linked requests and continues to improve its voice to sound more human, even giving you the choice of John Legend as one of the voices. The demo that stole the show and pushed our current idea of the assistant to the next level was Google Duplex.

With Duplex, Google Assistant helped find a hairdresser and a restaurant and then placed a call to book an appointment. This feature will be available later in the summer but had the whole audience divided between “wow” and “eek.” There was also some skepticism given how Google Assistant at home still feels quite basic in comparison.

The exchanges with both the hairdresser and the restaurant sounded very natural, but after listening a few times you can easily spot that the Assistant was picking up the key points of the task, like date and time. It was a focused interaction, which is very different from what we have at home, where we can invoke Google Assistant to ask her anything from the weather to how to make fresh pasta.

Some people were apparently uncomfortable with the calls because the people on the other end of the line were unaware they were talking to a bot. In the hairdresser case, Google Assistant did say she was making an appointment for her client, which I thought was a nice way to signal to the receptionist that she was speaking with an intermediary. It is tricky in this initial phase to balance disclosure with getting the service used and accepted. I am sure most people would hang up if the conversation started with: “Hi, I am Google Assistant.”

I found it interesting, as I spoke to others about this demo, that the reason they wanted to be made aware they were talking to Google Assistant was that they wanted to know what data was captured and shared, not necessarily because they did not want to interact with a bot. Google did not talk about that aspect on stage.

This area is still so new to us that companies will need a lot of trial and error to figure out what we are comfortable with. Already today, Microsoft’s Cortana can schedule a meeting for you in Outlook by contacting the people who need to participate and finding a time that works for everybody. The emails that are exchanged list Cortana as the sender, but people seem more comfortable with that, maybe because it is confined to email and does not set off our “rise of the machines” alarm bells.

There is no question in my mind, though, that agents, assistants, bots, call them what you like, will become more active players in our lives as they get smarter and improve their context awareness. We humans will have to learn how to make them part of the mix in a way that is socially acceptable not only to us but also to the others involved.

Privacy and Ethics

Since the Facebook and Cambridge Analytica debacle, the level of scrutiny tech companies face when it comes to privacy has increased. This, coupled with the rollout of the General Data Protection Regulation (GDPR) in Europe, has also raised expectations for the level of transparency companies display in this area. There was a definite difference in how Microsoft and Google addressed privacy on stage. Microsoft made a clear and bold statement about safeguarding users’ privacy, with CEO Satya Nadella saying:

“We need to ask ourselves not only what computers can do, but what computers should do,” he said. “We also have a responsibility as a tech industry to build trust in technology.”

He added that the tech industry needs to treat issues like privacy as a “human right.” Nadella’s statement echoed what Apple CEO Tim Cook said during a recent MSNBC interview:

“We care about the user experience. And we’re not going to traffic in your personal life. I think it’s an invasion of privacy. I think it’s – privacy to us is a human right.”

Their stance should not be a surprise, as both companies share a business model based on making a profit directly from the services and products they bring to market.

At Google I/O there was no explicit mention of privacy, but Sundar Pichai stated at the very start of the keynote:

“It’s clear that technology can be a positive force, but we can’t just be “wide-eyed” about the potential impact of these innovations…We feel a deep sense of responsibility to get these things right.”

When he said that, Pichai was referring to AI in particular.

Some were quick to argue that Google did not make a clear statement on privacy simply because they cannot do so given the nature of their business model. I would, however, argue that it was not so much a case of not throwing stones from inside a glass house as of not wanting to come out with an empty promise, or with a statement that would have seemed defensive when Google has not breached our trust. I think this point is important. Even when talking about what was introduced at Google I/O, many spoke of the price users pay for the benefit of these new smart apps and services: personal data. While this is true, it is also true that you have a choice not to use those services or not to share that data. That is quite different from Facebook, where people were aware their information would be used to target advertising but were not aware they were being tracked outside of Facebook. I also believe consumers see a greater ROI from sharing data with Google than they do with Facebook, which raises their tolerance for what they are willing to let Google access.

Where Google did feel comfortable going after Facebook was fake news, with a revamped Google News service that will double down on using AI to deliver better-quality, more targeted stories.

In my mind, asking Google to make a statement on privacy was the same as asking Google to change their business model. That is not going to happen any time soon, and certainly not on a developer conference stage. What I would like to hear more about from Google, however, is what data is seen, stored, and retained across all the interactions we have. GDPR will force their hand a little in doing that, but a more proactive approach might score them some brownie points and keep them from being lumped in with Facebook in the time-out corner.

Consumer vs. Business

Both companies made several announcements over the course of their opening keynotes that will impact businesses and consumers alike. Both had “feel good” stories that showed how wonderful tech can be when it positively affects people’s lives. Yet the strong consumer focus we saw at Google I/O had the audience cheering, wowing, and clapping with a level of engagement Microsoft did not see. This is not about how good the announcements were, but rather about how directly the audience could relate to them.

Given the respective core focus of the two companies, it is to be expected that Microsoft focused more on business and Google on consumers. That said, I have always argued that talking about users is not just good for the soul; it is good for business, especially when you are talking to developers. Business-class apps are important, and in some cases they can be more remunerative for developers than consumer apps. Most business apps are, however, developed in-house by enterprises or by third parties that grant access to them through their services.

For Microsoft, being able to talk about consumers is vital to attracting developers, yet there was not much mention of them on stage despite the bigger role the user now plays within the enterprise. Microsoft did announce an increase in revenue share, with developers now keeping 95% of the revenue compared to the 85% they get from Apple and Google. Believing that Microsoft is still committed to consumers, though, might go a long way toward getting developers on board in the first place.
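
As a quick, purely hypothetical illustration of what that revenue-share gap means for a developer (the revenue figure below is invented, not from either company):

```python
# Hypothetical example: developer take-home on $100,000 of app revenue
# under the two revenue shares mentioned above.

annual_revenue = 100_000  # invented figure for illustration only

for store, developer_share in [("Microsoft Store (new terms)", 0.95),
                               ("Apple / Google (85% tier)", 0.85)]:
    print(f"{store}: developer keeps ${annual_revenue * developer_share:,.0f}")
```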

 

We have a little longer to wait to hear what Apple will offer developers, but I have a sneaking feeling that privacy, AI, and ML will all be core to their strategy, while I am less clear about how big a role the cloud will play.