Chromebooks, iPads, and the Desire for New Computing Platforms

I recently got my hands on Google’s Pixel 2 Chromebook. I have wanted to use the Pixel 2 for some time to test it in my everyday computing workflows. There is so much to like about the Chromebook platform. It’s fast, fresh, and feels extremely modern. Much more modern than Windows or macOS/OS X. But it is the speed, lack of clutter, and overall fresh feeling of the OS that I like best. After a few weeks with the device, I can see how you could make a strong case that an operating system like this has more legs for the future of notebooks, and maybe desktops, than Windows or macOS/OS X. Except apps.

I wanted to love the Pixel 2, but the Android app ecosystem is not built for larger notebook-sized screens. Apps like Office or Slack were sufficient, not ideal or perfect, but sufficient. Apps like Twitter and Facebook were pretty terrible. When I tried to do things like edit photos or make movies, and many of the other things I do regularly on my Mac, I found apps galore, but they were all made for the smartphone, not a notebook screen. The lack of developer optimization of Android apps is the thing that hurt Android tablets and the thing that still hurts the Chromebook. But as I said, I wanted to love it.

The reason I wanted to love it was that it was free of all the legacy crust that sits on Windows and macOS. No constant pop-up windows or annoying alerts and system notifications. None of the junk that gets loaded at boot and makes your computer take seconds to minutes to start up. It was fast, crisp, and refreshing to use from an operating system standpoint. Having Google Assistant active as a chat window is fantastic as well. A similar feature exists in Windows with Cortana, while Siri is deployed only as a voice assistant, not one you can type to or chat with.

The overall feeling, and experience, I could not get past was just how fast and clean/fresh Chrome OS feels. It did make Windows and macOS feel like legacy operating systems. That being said, I have the same feeling, to a degree, when I use the iPad. The lure of the iPad as my primary computer has always been based on the fresh, fast, non-legacy feel of iOS. Add to that a large number of apps that have been optimized for the larger screen, and the iPad gives the feeling of a fresh new computer (not just smartphone) operating system.

New vs. Old
There is a divide coming that I can only describe as new vs. old computing paradigms. So many people who have spent years, and often decades, using legacy computing environments like Windows and macOS (OS X) have developed behavioral debt. These entrenched behaviors and trusted workflows, established through years of repetition, do not flow well into the more modern environments that iPads and Chromebooks present. However, when you look at younger millennials and the entire cohort of Gen Z, you see very different workflows being developed, all of which translate very well to Chromebooks and iPads.

At this point, I do not believe Microsoft can make Windows fit with the modern era that young people are gravitating to, and I don’t believe Apple can make macOS fit this need either. Both Windows and macOS will still exist and remain for the many millions of people who will always be more comfortable working with a legacy operating system on their notebook or desktop. But I’m not as convinced the next generations will settle on Windows or macOS but rather will seek out solutions running operating systems that are more suited for their workflows, which I believe are Chromebooks and iPads.

App Based Workflows
The big change, at least from my experience and behavioral observations with Chromebooks, is how much the workflow moves primarily to apps. Certainly, apps are used in Windows and macOS, but the browser is also heavily used. I’ve noticed that Chrome OS and the iPad move most of the workflow to apps, with much less of it in the browser. Personally, I love this, but many of the core workflows I live on have not yet translated to a Chromebook-only or iPad-only world, so I still find myself using my Mac more than either.

I, like many others, did try to move to the iPad entirely as my notebook but sadly returned to the Mac because I still needed it for so many tasks in my day-to-day. My kids are a different story. They never built up workflows that required a legacy operating system like Windows or macOS and are therefore entirely comfortable living only on Chromebooks and iPads. Both my daughters will be in a 1:1 iPad school next year, so both will use the iPad full time as their notebook.

In the long run, I maintain that the success of the iPad, and to a degree Chromebooks, depends entirely on developers embracing the new things these devices allow them to create, not just trying to recreate old computing paradigms or workflows and make them work on an iPad or Chromebook. The potential for both of these devices rests solely with developers, and while good strides have been made, on the iPad specifically, there is still a long way to go.

I don’t think either of these devices will replace Windows PCs or Macs anytime soon in the workplace. I do, however, believe there is a distinct possibility they will, but it will take five years or longer before we see any real traction. And that traction, or pull by the employee on the employer, may come largely from those who are college-aged and younger today as they enter the workforce. This may create a fascinating computing divide between millions of young people using modern workflows and the legacy computing generations using legacy workflows.

Consumer Privacy and Impact to Facebook’s Business Model

Oh, the joy of publishing research. Thanks, Business Insider, for taking our data out of context and publishing this headline.

I’m not going to link to it because hopefully all of our readers read Carolina’s article, where she interpreted the data and kept it in context. Let me briefly explain why the headline, not the data, is incorrect.

Firstly, not every American has a Facebook account. Facebook’s last reported number was 214 million accounts in the US. A percentage of those are business accounts/pages, so we have to keep that in mind when thinking about total US Facebook accounts. Interestingly, in our study, 21% of respondents said they do not have a Facebook account. So realistically, about 6 in 10 Americans have a Facebook account.

Secondly, we did not use the data to attempt to come up with a general population number of how many accounts in the US were deleted. Nor did we ask when those who deleted their accounts did so.

Folks in that 9% may have deleted their accounts last year, or years ago, and not necessarily because of recent events. The bottom line: 9% of respondents in our panel deleted their Facebook account at some point in time.

Now let’s say we did try to map our number to general population activity.

One could arrive at that this way: 9% of Facebook’s approximately 200 million US accounts is 18 million. Say that was spread out over the last five years, which would line up with Zuckerberg’s comments that recent events had no meaningful impact on people deleting their accounts. So, had we attempted to map the number to a general population statistic, the right way to say it would have been that 9% of Americans deleted their Facebook account at some point in time. This number IS NOT related solely to the most recent months, as many assumed our data implied. Sigh.
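To make that back-of-the-envelope mapping concrete, here is a minimal sketch of the arithmetic; the 200 million account base and the five-year spread are assumptions from the scenario above, not survey findings:

```python
# Hypothetical mapping of the 9% survey figure onto Facebook's US account
# base, as sketched above. Inputs are assumptions, not survey data.
US_FB_ACCOUNTS = 200_000_000  # approximate US account base
DELETED_SHARE = 0.09          # 9% of panel respondents deleted an account
YEARS = 5                     # assume deletions spread over five years

total_deleted = int(US_FB_ACCOUNTS * DELETED_SHARE)
per_year = total_deleted // YEARS

print(f"{total_deleted:,} implied deletions, ~{per_year:,} per year")
```

Spread over five years, even 18 million deletions works out to a pace Facebook could plausibly describe as having no meaningful impact.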

Behavior Change, not Account Deletion is the Real Story
The real story here was the 48% of consumers who said they are consciously using Facebook less, or the 37% who actually changed some privacy settings to limit the amount of data Facebook is collecting on them. As I pointed out in my article last week on consumer awareness of privacy, including tracking, 59% of consumers who took our survey agreed that they were not entirely comfortable with how good Facebook was getting at tracking their online behavior. There is a privacy awakening happening around the discreet tracking practices of companies like Facebook and Google. The big question to me is whether or not the growing consumer sensitivity to this starts to become an issue for Facebook’s business.

I said in my article last week that advertising businesses thrived for decades without the kind of hyper-targeting techniques Facebook and Google provide to advertisers. But the interesting thing to me, which became clearer the more I thought about it, was the type of ads the Facebook platform attracts. Facebook is essentially a direct-to-consumer advertising platform where the ads you see are largely trying to drive some type of conversion by the customer. Companies selling products or services are using Facebook to get their product or service directly in front of a potential customer in the hope of inspiring them to buy, install, or try something. Facebook is, essentially, a digital platform for junk mail.

With all the data, even the behind-the-scenes behavior tracking Facebook does, they tout their ability to create highly relevant ads. Mark Zuckerberg has said publicly many times something along these lines: “consumers are willing to use a free service in exchange for seeing ads. Our research shows the experience of seeing ads is better if those ads are relevant or interesting to consumers.” Facebook believes the best way to provide these relevant ads is through the deep behavior-tracking tools they have built.

So let’s examine how consumers feel about these ads Facebook provides with such rich user profiles.

  • 55% of consumers we surveyed said they don’t like all the ads they see on Facebook
  • Only 14% said they discovered a product or service via a Facebook ad they found valuable
  • 15% said they don’t mind the ads on Facebook

For a company building one of the most sophisticated ad targeting systems, having only 14% of our panel say they discovered a product or service they found valuable from a Facebook ad is a pretty abysmal number. Hence my analogy that Facebook ads are today’s digital equivalent of junk mail. So what is Facebook to do?

Get to Know Me but Don’t Spy on Me
There is a difference between knowing and spying. When you see an ad on Facebook for a product you searched for yesterday, the exact same product, that is where people feel a line is crossed. Yet we all like finding useful products that make our life easier or better. There has to be a balance on how to do this.

Personally, I am OK with Facebook knowing my interests: what sports I like, food I like, places I like to visit or vacation, etc. Things related to my interests are things I want to discover new products or experiences around. This is why magazines remain one of the best advertising mediums. I used to read a lot of magazines on things like fashion, food, and specific sports like golf or snowboarding, and I constantly found new stuff I wanted to buy. These mediums had a captive audience willing to pay for a magazine on a specific subject, and that is still a match made in heaven for advertisers. Facebook needs to somehow be more like magazines when it comes to ads.

I’ve never found a valuable product or service from a Facebook ad, and I intentionally pay attention to them as a part of my job. Yet I continually found, and in some cases still find (cooking magazines are the only ones I still read), useful and valuable products from magazine ads. Facebook should technically know significantly more about me than any magazine, yet Facebook has yet to connect me with a product or service I care about, and magazines continually do. This is a tough point to reconcile, and it speaks either to the failure of Facebook’s ad targeting mechanisms or to their failure to attract advertisers of really great products or services. Perhaps it is a mix of both.

I do think we may be at an inflection point when it comes to privacy awareness on behalf of consumers. Interestingly, this seems to be a much more US-centric shift, as consumers in many other countries already seem to be quite a bit more privacy conscious than US consumers. It will be interesting, if we are indeed at a privacy inflection point, to see how companies that provide a free service subsidized by ads adjust their overall strategy. There may very well be adjustments that have negative impacts on their business model.

There is a saying that you can’t serve two masters. This is the quandary Facebook finds itself in. As much as the leaders at Facebook want to make the product experience as good as it can be for consumers, the reality is that their livelihood as a company depends on advertising. Thus, Facebook’s customer is truly the advertiser first and the consumer second.

AI is no Knight in Shining Armor fighting to save Humanity

Last week, during Mark Zuckerberg’s congressional hearings, we heard Artificial Intelligence (AI) mentioned time and time again as the one-size-fits-all solution to Facebook’s problems of hate speech, harassment, fake news… Sadly, though, many agree with me that we are a long way away from AI being able to eradicate all that is bad on the internet.

Abusive language and behavior are very hard to detect, monitor, and predict. As Zuckerberg himself pointed out, there are many different factors that make this particular job hard: language, culture, and context all play a role in determining whether what we hear, read, or see should be deemed offensive.

The problem that we have today with most platforms, not just Facebook, is that humans are determining what is offensive. They might be using a set of parameters to do so, but they ultimately use their judgment. Hence consistency is an issue. Employing humans also makes it much harder to scale. Zuckerberg’s 20,000 people number sure is impressive, but when you think about the content that 2 billion active users can post in an hour, you can see how futile even that effort seems.

I don’t want to get into a discussion of how Zuckerberg might have used the promise of AI as a red herring to get some pressure off his back. But I do want to look at why, while AI can solve scalability, its consistency and accuracy in detecting hate speech in the first place is highly questionable today.

The “Feed It Enough Data” Argument

Before we can talk about AI and its potential benefits, we need to talk about Machine Learning (ML). For machines to be able to reason like a human, or hopefully better, they need to be able to learn. We teach the machines by using algorithms that discover patterns and generate insights from the massive amounts of data they are exposed to, so that they can make decisions on their own in the future. If we input enough pictures and descriptions of dogs and hand-code the software with what could look like a dog or be described as a dog, the machine will eventually be able to recognize the next engineered “doodle” as a dog.

So one would think that if you feed a machine enough swear words and racial, religious, or sexual slurs, it would be able not only to detect but also to predict toxic content going forward. The problem is that there is a lot of hate speech out there that uses very polite words, just as there is harmless content that is loaded with swear words. Innocuous words such as “animals” or “parasites” can be charged with hate when directed at a specific group of people. Users engaging in hate speech might also misspell words or use symbols instead of letters, all aimed at preventing keyword-based filters from catching them.
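To illustrate that limitation, here is a toy keyword filter, a sketch for illustration only and not any platform’s actual system, with a hypothetical blocklist and hypothetical example comments. It misses both politely worded hostility and simple character obfuscation while flagging innocuous usage:

```python
# Toy keyword-based filter, for illustration only. The blocklist and
# example comments are hypothetical.
BLOCKLIST = {"parasites", "vermin"}

def keyword_flag(comment: str) -> bool:
    """Flag a comment if any blocklisted word appears verbatim."""
    words = (w.strip(".,!?") for w in comment.lower().split())
    return any(w in BLOCKLIST for w in words)

# Politely worded hostility slips through: no blocklisted word appears.
print(keyword_flag("Those people do not belong in this country"))   # False

# Simple obfuscation (a symbol in place of a letter) also evades it.
print(keyword_flag("They are all par@sites"))                       # False

# While an innocuous veterinary comment gets flagged.
print(keyword_flag("Intestinal parasites are common in puppies."))  # True
```

Real moderation systems are far more sophisticated than this, but the failure modes sketched here are exactly the ones described above: intent lives in context, not in a word list.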

Furthermore, training the machine is still a process that involves humans, and consistency on what is offensive is hard to achieve. According to a study published by Kwok and Wang in 2013, there is a mere 33% agreement between coders from different races when tasked with identifying racist tweets.

In 2017, Jigsaw, a company operated by Alphabet, released an API called Perspective that uses machine learning to spot abuse and harassment online and is available to developers. Perspective created a “toxicity score” for the comments it had available, based on keywords and phrases, and then scored new content accordingly. The results were not very encouraging. According to New Scientist:

“you’re pretty smart for a girl” was deemed 18% similar to comments people had deemed toxic, whereas “I love Fuhrer” was 2% similar.

The “Feed It the Right Data” Argument

So, it seems that it is not about the amount of data but rather about the right kind of data. But how do we get to it? Haji Mohammad Saleem and his team at McGill University, in Montreal, tried a different approach.

They focused on content on Reddit, which they defined as “a major online home for both hateful speech communities and supporters for their target groups.” Access to a large amount of data from groups that are now banned on Reddit allowed the McGill team to analyze the linguistic practices that hate groups share, thus avoiding having to compile word lists while providing a large amount of data to train and test the classifiers. Their method resulted in fewer false positives, but it is still not perfect.

Some researchers believe that AI will never be able to be totally effective in catching toxic language as this is subjective and requires human judgment.

Minimizing Human Bias

Whether humans will be involved in coding or will remain mostly responsible for policing hate speech, it is really human bias that I am concerned about. This is different from approach consistency, which considers cultural, language, and context nuances. This is about humans’ personal beliefs creeping into their decisions when they are coding the machines or monitoring content. Try searching for “bad hair” and see how many images of beautifully crafted hair designs for Black women show up in your results. That, right there, is human bias creeping into an algorithm.

This is precisely why I have been very vocal about the importance of representation across tech overall, but in particular when talking about AI. If we have fair representation of gender, race, religious and political beliefs, and sexual orientation among the people trusted to teach the machines we will entrust with different kinds of tasks, we will have a better chance at minimizing bias.

Even when we eliminate bias to the best of our ability, we would be deluded to believe Zuckerberg’s rosy picture of the future. Hate speech, fake news, and toxic behavior change all the time, making the job of training machines a never-ending one. Ultimately, accountability rests with platform owners and with us as users. Humanity needs to save itself, not wait for AI.

The Unintended Consequences of an Audit of the USPS/Amazon Relationship

Last week President Trump ordered an audit of the USPS’s business model, which he believes is unsustainable. Vanity Fair had a solid take on this:

“After agitating for weeks over ways to make Amazon pay higher postage rates, Donald Trump has demanded a sweeping overhaul of the U.S. Postal Service’s business model. In an executive order Thursday, Trump called for the formation of an administration task force to be chaired by Treasury Secretary Steve Mnuchin, with a report outlining proposed changes delivered within 120 days. “Some factors, including the steep decline in First-Class Mail volume, coupled with legal mandates that compel the U.S.P.S. to incur substantial and inflexible costs, have resulted in a structural deficit,” Trump said. “The U.S.P.S. is on an unsustainable financial path and must be restructured to prevent a taxpayer-funded bailout.”

The article goes on to say that Trump is “off the hook” on this issue. Although he did not name Amazon and Jeff Bezos directly, given his tweets on this subject it is clear that this audit is aimed at hurting Amazon and Bezos specifically, with the intended result being to make it more expensive for Amazon to use the USPS as a part of its delivery service.

The Vanity Fair article goes on to say:

“The obsession isn’t shared by aides: despite Trump’s repeated claims, his advisers have said that the sheer volume of packages Amazon ships through the Postal Service has helped keep it afloat. Amazon and the Postal Service signed a five-year deal in 2013 to deliver packages on Sundays, and both entities have declared the arrangement a success.”

I see this audit as having two major consequences, one unintended. The first result could be to show that, as some of Trump’s aides have pointed out, the Amazon deal has lowered the USPS’s losses and is part of the reason the USPS is still “afloat,” as they have said. However, even with this conclusion, the audit could recommend that Amazon pay even higher fees when the next contract is negotiated shortly.

If that does happen, it could lead to the unintended consequence of pushing Bezos and Amazon to create their own dedicated fleet of planes and vans to free themselves from the USPS completely. There have been reports that Bezos and Amazon have been seriously looking at this option, and should the USPS, under Trump’s orders, move to charge significantly higher fees for delivering Amazon’s packages, it could be the thing that forces Amazon to create its own delivery service, at least in the US, and to move faster to make that happen.

I have been talking to sources at UPS and FedEx, and the prospect of Amazon acquiring a fleet of planes and delivery vehicles of its own keeps them up at night. If Amazon does this, imagine the impact it would have on them and the USPS. In the USPS’s case, it would cause even deeper losses, as well as the elimination of thousands of jobs that were added to help with the Amazon contract.

While an audit of the USPS is a good business practice, the motive behind this one is not. Trump and team had better be ready for an outcome that may not be to their liking. In a business fight between Trump and Bezos, my money is on Jeff Bezos. One way or the other, he has to do what is best for his company and shareholders, and creating a fleet of planes and vans that lets him own and control the entire supply chain may happen sooner rather than later thanks to this audit.

The Unseen Opportunities of AR and VR

Some of the most exciting and revolutionary innovations to appear on the scene over the last few years are augmented reality (AR) and virtual reality (VR). At their core, these eye-opening technologies—and the many variations that lie between them—are fundamentally new types of displays that let us see content in entirely different ways. From completely immersing us within the digital worlds of virtual reality, to enhancing our views of the “real world” with augmented reality, the products leveraging these capabilities offer experiences that delight the minds and open the imaginations of most people who try them.

And yet, sales of AR and VR products, and adoption of AR and VR apps to date have been relatively modest, and certainly lower than many had predicted. So, what’s the problem?

To better understand the opportunities and challenges facing the AR/VR market, TECHnalysis Research recently completed an online survey of 1,000 US consumers ages 18-74 who own at least one AR or VR headset. Questions covered a wide range of topics, all intended to learn more about why people bought certain AR/VR devices, how they use them, what they like and don’t like about them, and much more.

The responses revealed a range of different insights—some expected and some surprising—and made it clear that consumers who know about, and have had the opportunity to try, AR and VR products are generally very enthusiastic about them. In fact, the overall tone of the comments made by owners of these devices was surprisingly positive.

A few key facts from the survey results provide a good overview of the AR/VR market. First, most people who have tried AR or VR headsets have used one that leverages a smartphone, with more than twice as many people making that choice over a PC or game console-based option. Standalone headset usage was even lower, at about one-quarter of the number who had tried smartphone-driven solutions. Overall, 76% of respondents had only tried one type of system, while the remaining 24% had used two or more.

The most popular choices among survey respondents were the Samsung Gear VR, Sony PlayStation VR, and “other” smartphone-based headsets, such as the many generic options that were available last holiday season. Interestingly, the Sony PlayStation VR and Samsung Gear VR also had the highest satisfaction levels among owners (see Fig. 1), suggesting both that the products providing the best experience for the money were the most widely purchased, and that the aggressive marketing pursued by these companies has been effective. Around 81% of respondents own one device, but the remaining 19% own an average of 3, highlighting a group of dedicated, and curious, AR and VR enthusiasts (and pushing the overall average number of devices per person to 1.4).
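The per-person average cited above follows from a simple weighted average; this quick sketch just reproduces that arithmetic from the reported shares:

```python
# Weighted average of devices per respondent, from the survey shares above.
single_share = 0.81  # share who own exactly one device
multi_share = 0.19   # share who own more than one, averaging 3 devices

avg_devices = single_share * 1 + multi_share * 3
print(round(avg_devices, 1))  # 1.4
```

The small multi-device group pulls the average well above one device per person, which is why that enthusiast segment matters to the market.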

Fig. 1

One of the surprising findings from the study is that the frequency of using headsets was modest, with just 18% saying they used their headsets daily, while 38% reported weekly usage, and the largest group, at 41%, saying they only used them once or twice a month. The average session length across devices was a respectable 38 minutes, but the limited overall usage suggests concerns about limited available content, and an overall sense that, while they like the devices, they don’t like them enough to warrant more frequent usage. This, in turn, raises questions about pricing, because if the products aren’t used that frequently, it’s harder to justify (and harder for consumers to accept) higher prices. In fact, the products that did have the highest percentage of daily users were the HTC Vive, Windows 10 Mixed Reality headsets (from a variety of vendors), and Oculus Rift, all of which are priced higher than most other options on the market.

When respondents were asked to quantify the frequency with which they felt ill or queasy using an AR or VR headset, the numbers point out that this problem still exists. Thankfully, 56% of respondents said they never or only rarely have an issue, but one-third said it happens sometimes (defined as between 10 and 49% of the time they used a device), and 11% of owners said that queasy feelings occur frequently (50-100% of the time). Technology improvements around display refresh rates, reduced latencies and other advancements should reduce these numbers, but it’s clearly still a factor in preventing wider adoption.

Most of the study focused on AR and VR headsets, but the survey also included questions about smartphone-based AR app usage without attached headsets. Given all the hype around the launch of Apple’s ARKit for iOS and Google’s ARCore for Android, there have been high expectations for these new apps, but the survey results confirmed what others have reported: real-world usage is just so-so. About half of respondents use these kinds of apps at least once a month, but the other half either never used them or tried several and have essentially given up. Given that the respondents to this survey are generally enthusiastic about AR and VR and know/care enough about the technology to have purchased an AR/VR headset, the smartphone AR app numbers are definitely disappointing.

One reason for the modest numbers could be related to the most surprising finding of the study. Overall, respondents said they preferred VR over AR by a 3:1 ratio. Given all the industry discussion about how AR is expected to win the long-term technology battle, this certainly flies in the face of conventional thinking. Admittedly, more consumers have likely had exposure to VR than AR, but it was clear from many different types of questions throughout the study that the completely immersive experience offered by VR was one of the most appealing aspects of the technology. It was also surprising to see that the preference was consistent across the different age groups that took the survey (see Fig. 2).

Fig. 2

More than just about any other technology now available, current AR and VR products highlight the potential of what they will be able to do even more than what they can do today. The ability to see both the real world and entirely new worlds in completely different ways is unquestionably a compelling experience. As the technology and market evolve, the enthusiasm of today’s consumers will only grow. The opportunities may be a bit slow in coming, and the technology is unquestionably in its early days, but there’s little doubt that both will likely surpass our current expectations.

(A free copy of highlights from the TECHnalysis Research AR/VR study is available here.)

Podcast: Facebook Hearings, PC Shipments, GoPro

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the Facebook hearings before Congress, discussing recent PC shipment numbers and new HP gaming PCs, and chatting about the potential sale of GoPro to China’s Xiaomi.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

As Consumer Virtual Reality Lags, Commercial Interest Grows

The Virtual Reality headset market has taken its fair share of lumps in the last 18 months, as the industry has struggled to find the right combination of hardware, software, and pricing to drive consumer demand. But while consumers are proving a hard sell, many in the industry have found an increasing number of companies willing and eager to try out virtual reality for a growing list of commercial use cases. A recent IDC survey helps shed some light on this important trend.

Early Testing, and Key Verticals
IDC surveyed 500 U.S. IT decision makers between December 18, 2017, and January 19, 2018. One of the key takeaways from the survey: about 58% of IT buyers said their company was actively testing VR for use in the workplace. That number breaks down like this: More than 30% said they were in early testing stages with VR. Almost 18% said they were in the pilot stage. Close to 7% said they were in early stages of deployment. And about 3% said they had moved into late-stage deployment of the technology.

Early testing represents the lion’s share of that figure, and obviously, that means different things to different people. In some companies, it means a full-fledged testing scenario with different types of hardware, software, and services. In others, it may well mean somebody in IT bought an HTC Vive and is playing with it. But the fact that so many companies are in early testing is important to this nascent industry. It also means more than one-quarter of respondents said they were in a pilot stage or later with VR inside their company.

I have written at length in a previous column about the various industries where we see VR playing a role. In this survey, respondents from the following verticals had the highest response rates around VR: Education/Government, Transportation/Utilities/Construction/Resource, and Manufacturing. When we asked respondents who in their company was driving the move to embrace VR, IT managers were the most prominent (nearly 32%) followed by executives (28%) and line of business (nearly 17%).

Key Use Cases, and a Demand for Commercial Grade
Understanding how companies expect to use VR is another key element of the industry moving to support this trend. We asked respondents about their current primary use cases for VR, and the top three were product design, customer service, and employee training. The most interesting of those three to me is employee training. We’ve long expected VR to drive training opportunities related to high-risk jobs and those involving high-dollar equipment. Think training firefighters and doctors, as well as people who operate million-dollar machines. But VR is quickly moving beyond these types of training to include a much broader subset of employees. VR can speed knowledge transfer for everyone from retail salespeople to auto body repair specialists to new teachers. Many companies quickly realize that VR not only speeds onboarding of new employees but can be a big cost saver as well, as it cuts down on training-related travel and other expenses.

One of the key challenges for commercial VR rollouts to date is the simple fact that almost all of the hardware available today is distinctly consumer grade. I wrote earlier this year about HTC’s move to offer a more commercial-friendly product with the Vive Pro, which includes higher resolution and better ergonomics and will soon offer a wireless accessory to cut the tether. Beyond these types of updates, however, one of the things commercial buyers want from their hardware is something more robust, something that can stand up to the rough usage that occurs in a workplace. So when we asked respondents if they would be willing to pay more for commercial-grade hardware, a whopping 80% said yes.

Biggest VR Roadblocks
While the survey data points to an interest in VR among many companies, clear use cases, and a willingness to pay, bringing the technology into the workplace will still face numerous roadblocks. When we asked respondents about the biggest roadblocks to VR adoption within their company, the top answers included a lack of clear use case, hardware cost, software cost, and services cost. So while many companies can see the obvious use cases for VR, many IT decision makers are clearly having some difficulty articulating these use cases within their company. Also, while many express a strong interest in paying more for commercial-grade hardware, the cost of hardware, software, and services is still a major blocker within many companies.

It’s early days for VR in commercial settings, and many of these roadblocks will disappear as the technology improves, use cases crystallize, pricing comes down, and the return on investment from using VR comes clearly into view. In the meantime, the industry needs to move to embrace the growing demand for commercial VR, making it easier for companies ready to take the next step.

News You might have missed: Week of April 13, 2018

Apple cuts HomePod Orders

Bloomberg reported this week that, according to some Apple store workers, HomePod inventory is piling up. It also mentioned that by late March, Apple had cut some orders with Inventec Corp, one of the manufacturers that build the HomePod.

Via Bloomberg

  • I always find it hard to draw conclusions from mentions of cut orders, especially when more than one manufacturer is involved. If I am not mistaken, Inventec handles a very small share of HomePod volume in the first place, compared to Foxconn, so a cut in its orders might not amount to much.
  • Another source mentioned by Bloomberg as proof that sales are slow is a set of recently released numbers from Slice Intelligence. I looked into Slice Intelligence’s methodology back in 2015 when it published some Apple Watch numbers, and when looking at its numbers one has to remember a few things:
    • The information is based on e-receipts that consumers opt in to share through an iOS app. Its panel receipts seem to come mostly from online discount websites and big-box retailers, which would lead one to believe the panel skews toward a more deal-driven, mainstream base.
    • I don’t believe Slice has visibility into what is sold through Amazon, which in this case does not matter for HomePod but of course matters for overall market size and share, as it would exclude Echo devices.
  • I am not questioning the methodology. I am simply pointing out that the numbers might not be providing the full picture.
  • All that said, I would like to remind people that Apple’s core addressable market for HomePod is Apple Music users, who as of this week number 40 million paying subscribers. Of those, I think it is fair to assume a good number already have a speaker or some kind of sound system in their home. Others might primarily listen to Apple Music on the go rather than at home, which would not make HomePod a priority buy. Lastly, some might find the price high, especially when not in a position to properly test the superior sound experience.
  • It is hard to tell, of course, when a product’s sales channel is mostly Apple’s own, so numbers are very hard to come by. If I had to come up with a volume, I would expect the HomePod to sell less than AirPods.

Spotify and Hulu offer Subscription Bundle

The companies said Wednesday that a $12.99 per-month plan will get you access to Spotify’s ad-free music streaming service and Hulu’s basic package that allows you to stream TV shows and movies with some ad breaks.

Via CNN 

  • Currently, subscribing to both services individually would set you back about $18 a month, so the bundle provides a saving of roughly $5. Also, if you are already a Spotify customer, you can get Hulu for just an extra dollar for the first three months.
  • It seems to me that this is a case of a weaker player (Hulu) joining forces with a stronger brand (Spotify) in order to gain more traction. Hulu has been growing its original content to try and catch up to Netflix, and both Hulu and Netflix now have Amazon to contend with. Spotify, of course, remains the market leader in music, but with Apple Music’s numbers growing steadily, some differentiated value-add is not a bad thing.
  • I also wonder if Spotify is making a move early, with the expectation that Apple will be soon adding more video content to its Apple Music service.
  • It will be interesting to look at Spotify’s and Hulu’s numbers in the future to see who takes most of the hit on the discounted rates. Billing will be done by Spotify, which might signal that Hulu is the one taking the bigger hit on the revenue side.
  • I am not entirely sure this is the start of more bundles being offered, but I do expect brands to come together more often to fight a common enemy.

Xiaomi might be interested in GoPro

Chief Executive Officer Nick Woodman has said he is open to a deal and earlier this year the company hired investment bank JPMorgan Chase & Co. to advise it on a potential sale. According to The Information, Xiaomi might offer up to $1 billion but does not want to overpay.

Via Bloomberg 

  • This would be a smart move for both GoPro and Xiaomi, as the two look like a natural fit.
  • As mentioned in the article, GoPro has been looking for a buyer for a while and this was driven by a continued lackluster performance. What were revolutionary products at launch have not really evolved much in recent years both in terms of design and capabilities.
  • At the same time, smartphone camera improvements coupled with water-resistant designs have meant that, for the non-hard-core daredevils who might have wanted to take action shots during their holidays, a smartphone might just be enough now. This further limited the addressable market for GoPro, which relies on the US for the bulk of its sales.
  • In 2016, GoPro tried to broaden its portfolio by entering the drone market, but that did not last long, as GoPro seemed unable to keep up with growing competition from new startups as well as established names like DJI.
  • What makes Xiaomi a particularly good fit is that it would give content shot with GoPro cameras, whether standalone or integrated into a phone, a much easier path to being streamed, thanks to the strong ecosystem of apps and connected devices Xiaomi has in China and other markets.
  • Xiaomi should also be able to leverage the GoPro brand especially in markets like the US where the Chinese brand has yet to make a strong move with its smartphones. Leaving the GoPro team in California might also help Xiaomi in future market moves.
  • GoPro’s distribution channel, with a presence in over 100 countries, will also be very valuable to Xiaomi.
  • Of course, the deal might get the attention of regulators given the current trade environment with China.

Intel pushes FPGAs for mainstream enterprise acceleration

Though NVIDIA gets most of the attention, and rightfully so, for accelerating the move to more advanced processing technologies with its massive push of GPU hardware into servers for all kinds of general compute purposes, Intel has a couple of irons in the fire as well. While we are still waiting to see what Raja Koduri and the graphics team can do on the GPU side itself, Intel has another angle for improving efficiency and performance in the data center.

Intel’s re-entry into the world of accelerators comes on the heels of a failed attempt at bridging the gap with a twist on its x86 architecture design, initially called Larrabee. Intel first announced and showed this technology, which combined dozens of small x86 cores on a single chip, at an IDF under the pretense that it was a discrete graphics solution. That well dried up quickly, though, as the engineers realized it couldn’t keep up with the likes of NVIDIA and AMD in graphics rendering. Larrabee eventually became a discrete co-processor called Knights Landing, shipping in 2015 but killed off in 2017 due to a lack of customer demand.

Also in 2015, Intel purchased Altera, one of the largest makers of FPGAs (field-programmable gate arrays), for just over $16 billion. These chips are unique in that they can be reprogrammed and adjusted as workloads and algorithms shift, allowing enterprises to have the equivalent of custom-architecture processors on hand as they need them. Xilinx is the other major player in this field, and now that Intel has gobbled up Altera, Xilinx must face down the blue-chip giant in a new battle.

Intel’s purchase decision made a lot of sense, even at the time, but the fruits of that labor are showing now. As NVIDIA has proven, more and more workloads are being shifted from general compute processors like the Xeon family to efficient and powerful secondary compute models. The GPU is the most obvious solution today, but FPGAs are another, and one that is growing substantially in the move to machine learning and artificial intelligence.

Though initially shipping as a combination Xeon processor and FPGA die on a single package, Intel is now offering customers Programmable Acceleration Cards (PACs) that feature the Intel Arria 10 GX FPGA as an add-in option for servers. These are half-height, half-length PCI Express add-in cards with a PCIe 3.0 x8 interface, 8GB of DDR4 memory, and 128MB of flash for storage. They operate inside a 60-watt envelope, well below the Xeon CPUs and NVIDIA GPUs they are looking to supplant.

Intel has spent a lot of time and money developing the necessary software stack for this platform as well, called the Acceleration Stack for Intel Xeon Scalable processors with FPGAs. It provides acceleration libraries, frameworks, SDKs, and the Open Programmable Acceleration Engine (OPAE), all of which attempt to lower the barrier to entry for developers bringing work to the FPGA field. One of Intel’s biggest strengths over the last 30 years has been its focus on developers and on enabling them to code and produce on its hardware effectively; I have little doubt Intel’s developer support will be class-leading for its Altera line.

Adoption of the accelerators should pick up with the news that Dell EMC and Fujitsu are selling servers that integrate the FPGAs for the mainstream market. Gaining traction with top-tier OEMs like Dell EMC means awareness of the technology will increase quickly, and adoption, if the Intel software tools do their job, should spike. The Dell PowerEdge R740 and R740XD will be able to support up to four FPGAs, while the R640 will support a single add-in card.

Though specific performance claims are light, mainly due to the bespoke nature of each FPGA implementation and the customer using and coding for it, Intel has stated that tests with the Arria 10 GX FPGA can see a 2x improvement in options trading performance, 3x better storage compression, and 20x faster real-time data analytics. One software partner, Levyx, which provides high-performance data processing software for big data, built an FPGA-powered system that achieved “an eight-fold improvement in algorithm execution and twice the speed in options calculation compared to traditional Spark implementations.”

These are incredible numbers, though Intel has a long way to go before adoption of this and future FPGA technologies can rival what NVIDIA has done for the data center. There is a large opportunity in the areas of AI, genomics, security, and more. Intel hasn’t demonstrated a sterling record with new-market infiltration in recent years, but thanks to the experience and expertise the Altera team brought with that 2015 acquisition, Intel appears to be on the right track to give Xilinx a run for its money.

The Consumer Right to Privacy vs. Our Right to Not be Tracked

I have officially watched more C-SPAN 3 the last few days than at any other point in my life combined. The subject: Mark Zuckerberg taking questions from the United States Senate. In the vast majority of industry or executive presentations I give, I always start with a point about the times we live in being unprecedented in technology industry history. The examples I have to support this point are many, and the current situation Facebook finds itself in is no exception. Facebook, every nation, and every consumer now face a philosophical fork in the road. The question is less about a consumer’s right to privacy, which it seems no one disputes, and more about whether we have the right not to be tracked.

Unprecedented Amounts of Consumer Data
There is a reason Facebook and Google have become the advertising juggernauts they are today. They have collected mountains of user behavior data, mostly behind the scenes, by tracking consumers’ behavior in depth both when they are on these services and when they aren’t. In fact, I would venture a highly educated guess that if consumers had any idea how much they were being tracked online by Facebook, and to a degree Google, they would be shocked and probably angry. In a recent study our team at Creative Strategies, Inc conducted on Facebook (which Carolina Milanesi dove into yesterday), we asked US consumers a range of sentiment questions. On this point of Facebook’s ability to track us, 53% of respondents checked the answer option “I am not entirely comfortable with how good Facebook has become at tracking my online activity.”

Now, as consumers who engage with a free service with the understanding that we will see ads, I think the high-level philosophical question is how much data the free service should be allowed to gather on its users. Facebook would argue it needs as much as necessary to provide a better advertising experience. I’ll acknowledge that point and agree that if we have to suffer through ads, they may as well be relevant. So I suppose the bigger question is how much data is necessary for Facebook to serve me a relevant ad. The reality is it is probably a lot less than Facebook gathers on its users today.

Here we have some precedent set by the EU with the GDPR. At a high level, the GDPR is an effort to limit the amount of data collected on consumers and to prevent companies from making consent to share “more than the absolute bare minimum” of information a condition of receiving the free service. The language centers on “data that is absolutely necessary to run the service.” That phrase is worth exploring in this situation, because it can be interpreted many different ways.

For example, could Facebook or Google say that knowing my location at any given time is necessary to run the service? Could Facebook say that knowing my browsing history is necessary to run the service? The logical answer seems to be certainly not, but only if we define Facebook’s or Google’s services narrowly. Google could say that to effectively run a search and give a consumer the desired, relevant response, it needs to know the user’s location, browsing history, places they have been, things they like, and so on. Similarly, Facebook could say it needs to know the things you talk about in private messages, all your likes and dislikes (whether you clicked a like button or not), and a host of other data points collected behind the scenes to effectively run the Facebook service.

The main takeaway is that both Facebook and Google do become better services, and improve over time, the more data they have on you. This truth conveniently lines up with their business model, because while all of that collected data makes the service better for the end user, it also makes it better for the advertiser. Both Facebook and Google have collected significantly more data by “snooping” on their users in order to give advertisers the ability to hyper-target specific customers. The big question in my mind is whether this should be allowed in the first place.

Opting In
What I think is necessary is a clearer accounting of the behind-the-scenes, non-public data Facebook collects. Then Facebook, and Google for that matter, should ask consumers if it is OK to track their location, read their emails and messages, observe their browsing history, etc., and give them the ability to opt in or opt out. The companies can freely explain that by opting in, the services and ads will be more relevant and interesting, but the choice should be the customer’s: to know how much data they are handing to Facebook and Google and to decide if that is something they want to do. Today, there is little transparency around the behind-the-scenes, non-public behavior tracking of Facebook’s and Google’s users, and that is the biggest change I think needs to come.

The basic consumer right to privacy, which everyone agrees with, needs to include the right not to be tracked beyond what the consumer consciously agrees to. The public behavior, the kind Mark Zuckerberg kept using as an example in his questioning before the Senate and Congress, covers Facebook collecting information on things you have liked or disliked, photos you post, etc. Things done consciously in public on the service seem like fair game. If I like a brand or product page, or post, or comment, I consciously know this is done in public for all to see. It’s perfectly acceptable for Facebook to use that information to get to know me better. This is why, every time Mark Zuckerberg was asked about the amount of data Facebook collected that was not public, he dodged the question and just said, “I will check with my team and get back to you.”

I don’t believe regulating Facebook directly is the answer. If anything, I’d rather see an act along the lines of the California Privacy Act get passed, one that forces companies to be more transparent about what data they collect and to provide options to opt out of those collection efforts. In general, we may need regulation simply to limit how much data can be collected on consumers in the first place. The advertising industry thrived for decades without the ability to hyper-target consumers the way Facebook and Google offer. A basic consumer profile has worked for decades, and it can work going forward.

The California Privacy Act may be a good start, but it may need to go further. We may need the government to put laws in place that shelter parts of consumers’ lives and online behavior, and simply declare those things off-limits for companies like Facebook and Google to collect data on. I appreciate the tagline of the California Privacy Act, “your life is not their business.” This is exactly right, and I would add that I’m OK with my interests being their business, but not the minutiae of my life and online activity. We have a long way to go, and as I said, we are in unprecedented times.

Are Self-driving Cars Targets for Advertisers?

One of the more inevitable technologies set to impact our lives in the next 5-10 years is the autonomous vehicle. And one of the ultimate virtues of these cars will be that, as a passenger, you will not have to actually drive the vehicle but can instead lean back and read, watch a video, etc.

But as this chart below points out, any person in a self-driving car will probably become a major target for location-based advertising.
The key question this survey asked was “Imagine that your car could suggest things to you as you are driving around town, based on the places you are passing along your route. How useful would you find the following?”

Those who took the survey, some 2,000 people, knew the car would be self-driving, and their responses to this question are shown in the chart.

Given the question’s parameters, I should not have been surprised at these answers. We have some of this location-based advertising in our in-car GPS systems now that point out gas stations, restaurants, and even some services today.

But until I had a call with a top advertising executive in Washington, DC last week, I did not realize that the advertising world is salivating over this idea and gearing up to innovate, and potentially bombard us with ads, as we take our seats in vehicles we no longer have to drive.

When I first looked at this chart, my initial response was no way. I do not want to be targeted as I drive around. But my bias was based on the way I drive today. Since I am at the wheel and don’t want any distractions while I drive, anyone pushing an ad to me is an onerous idea. But if I am not at the wheel, then this type of advertising makes sense.

However, the idea of a car being a new “vehicle” for advertisers is important. Conservative estimates suggest that by 2025 we could have at least 5 million self-driving cars on the road, and by 2030 we could see well over 20 million. Many of these self-driving vehicles will, in the first stages, be in fleets and work more like driverless on-call taxis. If you have been in a cab in NYC or other major cities, you know that taxis now have video-based ads, and in some cases these are tied to dedicated video programs that tell you what you can do in that city or give suggestions on places to eat, stay, or tourist attractions to visit.

The advertising distribution companies are already vying to become the ones to supply special video monitors or even interactive terminals in these fleet cars so people can see ads and, in some cases, even buy tickets to tourist attractions that are printed out for them in the car as they drive to that attraction. For advertisers, the ability to use fleet vehicles for ads is a relatively easy proposition.

But the bigger question they are working on is, “How do we bring location-based advertising to people who buy their own self-driving cars in the future?” In fleet cars, they can do deals to add these video screens and terminals. But for private vehicles, this is a more complicated issue. Do they cut deals with the carmakers? Do they provide video screens or terminals free to private owners of self-driving vehicles? Do they hope that users will be looking at their smartphones or tablets and use those to serve ads while they are chauffeured here or there on any given trip?

Advertisers see gold when it comes to providing ads to the automated driving crowd of the future but have some difficult roads to navigate before they ever reach their goals. The good news for them is that they have 5-10 years to get it right, and if you know the history of advertising, get it right they will, even if it takes time.

US Consumers Want More Transparency from Facebook

Mark Zuckerberg went on record to say that, thus far, the #DeleteFacebook meme has not had much impact. We at Creative Strategies wanted to see if that was the case, and more importantly, we wanted to understand how the general public felt after the Cambridge Analytica incident. We ran a study across 1,000 Americans representative of the US population in gender and age.

It would seem impossible for people to have missed the Cambridge Analytica incident, given the extensive press coverage. But we wanted to make sure people outside the tech bubble were aware of it, so we asked: 39% said they were very aware, and another 37% said they were somewhat aware of what happened. Awareness among men was higher, with 48% saying they were very aware, compared to 29% of women.

There is no Trust without Transparency

Once we established awareness, we wanted to understand what it would take to win users’ trust back, if their trust was indeed impacted. What we found was quite interesting. First, 28% of the people we interviewed never trusted Facebook to begin with. This number grows to 35% among men. When it comes to winning trust back, it seems the answer rests on understanding and power. More precisely, gaining a better understanding of what data is shared (41%) and having the power to decide whether or not we are OK with sharing such data (40%). One of the answer options we gave was about making it easier to manage the amount of personal information shared, but this was not as much of an ask for the panel, with only 33% selecting it. It seems to me that what users are asking for is more transparency rather than more tools to manage their settings, which makes a lot of sense.

How can I manage my information if I don’t even understand what is collected and how it is used? This was a point several senators made during Mark Zuckerberg’s hearing, highlighting how long the Facebook terms of service document is. Zuckerberg’s response was that things are not as complicated as they seem: Facebook users know that what they share can be used by Facebook. Unfortunately, it is not as simple as that. The ramifications of how the data users share is used are quite complicated, and even if you understand the Facebook business model, you would be hard-pressed to know how far your data goes.

Better management of toxic content is also an action point that would help with trust: 39%. Not surprisingly, this is a hot button for women (49%) more than it is for men (31%). I say not surprisingly, not because I experience it first-hand, but because over the years several studies have shown a higher number of harassment cases among women online. The Rad Campaign Online Harassment Survey in 2014 found that women are more likely than men to experience harassment on social media. Sixty-two percent of people who reported harassment experienced it on Facebook, 24% on Twitter, 20% via email, and 18% on YouTube. The Halt Abuse study of online harassment from 2000 to 2011 found that women made up 72% of victims, while men made up 47.5% of perpetrators.

Only 15% of our panelists said there is nothing Facebook can do to regain their trust, as they are just ready to move on to something else. Of course, if this sentiment were similar across other countries, 15% of 2 billion users is a sizable chunk of the installed base that would disappear. What is interesting is that the number grows to 18% among people who said they were very aware of the Cambridge Analytica incident. Our study ran before the new details on the number of people impacted by the Cambridge Analytica data breach were released, and before the AggregateIQ and CubeYou breaches were revealed. It would be fair to assume this initial negative sentiment might indeed grow.

Lower Engagement is the real Risk for Facebook

Privacy matters to our panelists. Thirty-six percent said they are very concerned about it, and another 41% said they are somewhat concerned.

Their behavior on Facebook has somewhat changed due to their privacy concerns. Seventeen percent deleted the Facebook app from their phone, 11% deleted it from other devices, and 9% deleted their account altogether. These numbers might not worry Facebook too much, but there are less drastic steps users are taking that should be worrying, as they directly impact Facebook’s business model.

The largest group of panelists (39%) say they are more careful not just with what they post but also with what they like and how they react to brands’ and friends’ posts. Thirty-five percent said they are using Facebook less than they used to, and another 31% changed their settings. Twenty-one percent said they plan to use Facebook much less in the future. Others, in the free-format comments, pointed out that they will take a more voyeuristic stance, going on Facebook to look at what people post but not engaging. This should be the real concern for Facebook, as unengaged users will prove less valuable to the brands paying for Facebook’s services.

Connecting People

After reading through the data on privacy concerns and plans to lower engagement, one wonders why people are on Facebook in the first place. Here is where Zuckerberg’s explanation of what he created rings true to users: connecting people. Fifty-three percent of our panelists are on Facebook to keep in touch with friends and loved ones who don’t live in the area. Forty-eight percent said they are on Facebook to keep up with friends they had lost touch with. Messenger and Groups are the other two drivers to the platform, attracting 19% and 16% of the panelists respectively. For those panelists who are very concerned about privacy, the opportunity to keep in touch with people is an even stronger driver and seems to be enough to make using Facebook worthwhile.

Twenty percent of the panel said they are on Facebook because they are bored. This data point deserves a whole separate discussion, in my view, on the role social media plays as the digital gossip magazine or the real-life soap opera channel.

Facebook was built to connect people, Zuckerberg kept repeating to senators in Washington, and 40% of our panelists who have been on the platform for more than seven years wish Facebook could go back to being how it was. Alas, I doubt that is an option for Zuckerberg, though he did say a paid version of Facebook might be one. When we asked our panelists if they would be interested in paying for a Facebook version without advertising and with stricter guarantees of privacy protection, 59% said no.

Implementing changes to the platform so that privacy is better protected is not trivial when those changes impact the core business model. Some of the discussion in Washington pointed to the monopoly Facebook has, which could be the biggest factor in determining how forgiving users will be. What is clear, however, is that the size Facebook has reached makes this a global issue, not just a US one.

The New Security Reality

On the eve of next week’s RSA conference, it’s worthwhile to take a step back to reconsider what security means in today’s tech world. While the show has traditionally concentrated on cybersecurity threats, in an age of YouTube campus shootings, autonomous automobile-related deaths, and nation-state-driven, influence-peddling social media campaigns, the conversations at this year’s show are likely to be much more wide-ranging.

Admittedly, it’s not realistic to think that all the major issues driving these new kinds of threats can be addressed in a single conference. Nevertheless, the reality of these threats dramatically highlights both the depth of influence that technology now has regarding all forms of security, and just how far the tech world has reached into more traditional physical and political notions of security. Tech-related security issues now affect everything and everyone in some way or other.

In light of this new perspective, it’s also important to rethink how tech-related security challenges get addressed. While individual company efforts will clearly continue to be important, it’s also clear now that the only way to effectively tackle these kinds of big issues is through cooperation among many players.

In the past, many companies have been reluctant to share the security issues impacting them for fear of being seen as naïve or unprepared. With large scale brand trust concerns at stake, as well as the egos and reputations of many proud security professionals, perhaps it wasn’t surprising to see these kinds of reactions.

Today, however, the simple fact is that every company of any size is getting digitally attacked on a daily basis in some form or another, and a huge percentage of companies have had at least some type of security compromise impact them—whether they’ve admitted it or not.

Given this troubling, but realistic, landscape, it’s time for companies to more aggressively seek to partner with others to address the enormous tech-related security challenges we all face. In some cases, that might be via sharing critical, or even potentially sensitive, data to ensure that others can learn from the challenges that have already occurred. For example, companies involved in testing autonomous cars ought to be sharing their results with others in the industry, instead of hoarding them and treating these results as a proprietary resource. For other situations, cooperation might take the form of a more open, willing, and proactive attitude towards sharing experiences and learning best practices from one another.

Regardless of the approach, it’s going to take some strength of corporate character and some new ways of thinking to effectively address these issues.

Interestingly, one of the better and more recent examples of this proactivity that I’ve witnessed is the effort that Intel made to contact and engage with some of its key competitors in the semiconductor space—AMD and ARM—when they learned about the Spectre and Meltdown bugs that plagued many modern CPUs.

In case you need a quick refresher, the Spectre and Meltdown issues essentially involve manipulating a characteristic of modern CPU design called speculative execution that’s been common in processors from these and many other companies for roughly two decades. As the story played out, Intel took the vast majority of the heat, despite the fact that many other large companies, including Apple and Google, had to deal with most of the same issues.

Part of the focus was (and still is) undoubtedly due to the fact that Intel is the largest semiconductor manufacturer in the world and traditionally known as the major CPU provider to many computing devices. But another reason is that Intel took the lead in publicizing the challenges and continually provided updates on remedies for them. In fact, the company helped coordinate one of the more impressive briefings I’ve been on in nearly 20 years as an analyst by pulling together Intel, AMD and ARM people on the same call to explain the news shortly before it was made public.

At the time, it was a bit shocking to have these competitors come together to discuss the issue, but in retrospect, I realize it was exactly the kind of effort that the tech industry is going to need moving forward to address the kind of big security issues we all will likely continue facing.

Instead of benefitting from taking a more proactive approach to these issues, Intel took a great deal of criticism in both the tech and general press, much of it unfairly from my perspective. The company has followed up with a series of commitments to security—including, notably, a very public “security first” pledge from CEO Brian Krzanich—and is using the challenges that the exploits created as a catalyst for building a full complement of better security solutions moving forward.

The process clearly isn’t an easy one, but given the harsh new security realities that we’re facing, it’s the kind of effort we’re going to need other tech companies to make as well.

Should Apple Create a Social Network?

Recently, Tim Cook gave multiple interviews on Apple’s commitment to protecting customer privacy. This is part of the DNA that Steve Jobs instilled in Apple’s leadership since he returned in 1997.

Here are two key things Cook stated in his interviews on this subject:

On Apple’s recent emphasis on customer privacy

“We do think that people want us to help them keep their lives private. We see that privacy is a fundamental human right that people have. We are going to do everything that we can to help maintain that trust. …

Our view on this comes from a values point of view, not from a commercial interest point of view. Our values are that we do think that people have a right to privacy. And that our customers are not our products. We don’t collect a lot of your data and understand every detail of your life. That’s just not the business that we are in.”

On how customer purchasing history is used

“Let me be clear. If you buy something from the App Store, we do know what you bought from the App Store, obviously. We think customers are fine with that. Many customers want us to recommend an app. But what they don’t want to do, they don’t want your email to be read, and then to pick up on keywords in your email and then to use that information to then market you things on a different application that you’re using. …

If you’re in our News app, and you’re reading something, we don’t think that in the News app that we should know what you did with us on the Music app — not to trade information from app to app to app to app.”

As one who has covered Apple for decades, I can attest that this privacy issue has been front and center for Steve Jobs and Apple since Jobs returned in 1997.

Because of Apple’s business model, which focuses on products like the Mac, iPad, iPhone and Apple Watch, they are not reliant on ads to grow their business. This allows them to deliver a highly secure and private experience to those who buy and use their products and services.

Given the problems that Facebook is having, and the fact that it, Twitter, and Google can only make money through ads, perhaps it is time for Apple to create their own secure, private social network. Apple already has the backend infrastructure in place to support this and could charge a nominal monthly fee of perhaps $1.99 to $2.99 to cover the additional back-end network infrastructure needed to support hundreds of millions of social network users on their service/system.

I see Apple and potential users benefiting from an Apple-hosted secure private network in many ways:

  • First, a secure private social network from Apple would give anyone who uses it an ad-free, highly private social network that would allow them to interact with their friends unfettered by ads of any type. Without ads, Apple is not scraping their data, and people would be free to share things with each other without any fear of Apple or anyone else seeing what they post, other than the people a user allows to view their profile through a friend confirmation.
  • Second, Apple already has over 1.2 billion customers around the world, and of those, 800 million have given Apple their credit card to use to pay for additional services. That is a very large base to tap into if Apple should decide to do a secure, ad-free social network of their own.
  • Third, this could entice many folks outside of the Apple ecosystem to join Apple’s social network for the privacy alone and ditch Facebook completely.

When asked what he would do if he were currently faced with the problems confronting Facebook CEO Mark Zuckerberg, Cook said: “I wouldn’t be in this situation.”

Tim Cook believes that an ad-free social network is the only way one could provide a truly secure social network. Of course, Mark Zuckerberg disagreed with Cook and called his comments “glib”. Zuckerberg thinks he can create a secure social network that he can keep free, supported by ads. The jury is still out on that one, but Apple’s main business model is selling products. They could offer a very low cost, secure, private social network rather easily should they see this as a real opportunity to keep people in the Apple ecosystem and entice others who are not in Apple’s services and product network to join.

Will Apple do this? I would not bet against it. I have to believe that, at the very least, Apple’s execs have been discussing this. They are the only ones who could deliver a private, secure social network that does not need ads to support it. And Tim Cook’s comments show he understands the value of a social network that does not use ads to support it.

Podcast: Facebook, Intel CPUs, YouTube

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing the latest travails for Facebook, chatting about CPU battles for PCs and how the new Intel CPU announcements are affecting PC makers, and analyzing what the longer-term impact on Silicon Valley might be from the YouTube shooting incident.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Silicon Valley Giants Should Hold A Summit on Data Security and Privacy

Next week, Mark Zuckerberg will be trotted in front of Senate and House committees for what will surely be a high-profile admonishment of his company’s behavior. There’ll be a lot of bloviating by our elected representatives. Zuckerberg will be both prepared and contrite. But this issue is bigger than Facebook. Facebook might be the whipping boy du jour, but most of the Silicon Valley giants have exhibited some form of less-than-laudatory behavior at some recent point, whether it’s unseemly uses of user data, security breaches, or a lack of transparency. That’s why Tim Cook’s vilification of Mark Zuckerberg earlier this week was actually a bit off-putting.

Notwithstanding next week’s expected spectacle, we should recognize that this is an important moment in the 20-year history of the broadband Internet. It is time for the Silicon Valley giants – who control a vastly outsized share of how users connect, communicate, and consume content – to come together and establish some rules of the road going forward. They need to show some leadership, and head off regulators from getting overly involved. Perhaps they should do this collectively, in some way. Why not a summit on data security and privacy?

Let’s first recognize that much as these companies compete, they are very much intertwined. Facebook, Apple, and Google, especially, would each be far less without each other. Rather than each of them taking this moment in time to address the issue of consumer data and privacy separately, it might make sense to do something collectively. There are three steps involved in this, in my view.

Step One is to establish some minimum level of what is appropriate to do with a customer’s data. We can call it a ‘code of conduct’, although the term is a bit vague and overused. In some cases, we have been criticizing these companies for crossing a line, when the line itself has not been particularly defined. Consumers know that there’s sort of an unofficial bargain here. Facebook, Google, and Twitter might be free, but these companies profit from customers’ data and what they do online. Some subscribers might be OK with some of their data being made available to third parties – especially if they benefit from it in some way. But more direct communication and transparency is crucial here. We also need to recognize that as technology evolves and future opportunities arise, there will need to be an ongoing dialogue.

Step Two involves doing a much better job of communicating to customers exactly what is done with their data, and what sorts of controls they have over that. As an example, perhaps all Facebook users should be required to take an online tutorial that shows both how their data is or could be used, and what tools and settings are available to help them manage it. It is also incumbent on us to become better educated about what is being done with our data, and what we can do to exert some control over that. Perhaps the leading companies should get together and develop the data equivalent of ‘drivers ed’, as a requisite for using these [free] services.

Step Three is developing better consistency across platforms that enable consumers to manage privacy and what is done with their data. Right now, the experience of managing app settings, from privacy to notifications, is quite different between iOS and Android devices, and fragmented still further within the Android ecosystem. It’s a whole other ballgame on PCs, and across the different OSs, browsers, and the like. Within leading apps, such as Facebook, Twitter, Gmail, Outlook, and so on, it would be great if there was some easy to find and easy to use ‘hub’ that would function as both a source of information, and present a consistent approach to managing settings. Too often, this stuff is in obscure places and the configuration tools somewhat obtuse.

At a minimum, this ‘summit’ would include Apple, Google, Facebook, Twitter, Microsoft, and possibly Amazon. It might also make sense to have AT&T, Verizon, Comcast, and possibly Netflix and Intel at the table. This group collectively represents an astounding 70% of the collective PC/mobile OS, digital advertising, online commerce, mobile/broadband, and pay TV markets.

We’ve been fond of characterizing these first 20 years of the broadband Internet as sort of a ‘Wild West’. But now that much of the land has been grabbed, it’s time to bring some order to these parts. I think it’s a cop-out for Messrs. Cook and Zuckerberg to say that maybe there should be some ‘regulation’. I’m not sure that regulators know enough about this stuff to get it right, and let’s face it, our Congressional leaders aren’t exactly high on the customer trust/satisfaction list themselves these days. I’d love to see the extremely smart and capable leaders of Silicon Valley have a collective deep think about how they can rebuild a relationship with their customers that has become a tad fractured of late.

NVIDIA GTC proves investment in developers can pay dividends

Last week NVIDIA hosted its annual GPU Technology Conference in San Jose and I attended the event to learn about what technologies and innovations the company was planning for 2018 and beyond. NVIDIA outlined its advancements in many of its growing markets including artificial intelligence, machine learning, and autonomous driving. We even saw new announcements around NVIDIA-powered robotics platforms and development capabilities. Though we were missing information on new GPU architectures or products aimed at the gaming community, there was a lot of news to take in from the show.

I came away from the week impressed with the execution of NVIDIA and its engineering teams as well as the executive leadership that I got to speak with. CEO Jensen Huang was as energetic and lively as I have ever seen him on stage and he maintained that during analyst and media briefings, mixing humor, excitement, and pride in equal doses. It is one of the areas of impact that a show like GTC can have that doesn’t make it to headlines or press releases but for the audience that the show caters to, it’s critical.

The NVIDIA GTC site definitely states its goal up front.

GTC remains one of the last standing developer focused conferences from a major technology hardware company. Though NVIDIA will tell you (as it did me) that it considers itself as much a software company as a chip company, the fact is that NVIDIA has the ability to leverage its software expertise because of the technological advantages its current hardware lineup provides. While events like Facebook F8, Cisco DevNet, and Microsoft BUILD continue to be showcases for those organizations, hardware developer conferences have dwindled. Intel no longer holds its Intel Developer Forum, AMD has had no developer focused show for several years, and giants like Qualcomm and Broadcom are lacking as well.

GTC has grown into a significant force of change for NVIDIA. Over the 10+ years of its existence, attendance has increased more than 10x from initial numbers. The 2018 iteration drew more than 8,500 attendees, including developers, researchers, startups, high-level executives from numerous companies, and a healthy dose of media.

NVIDIA utilizes GTC to reach the audience of people that are truly developing the future. Software developers are a crucial piece, and the ability to equip them with information about tool sets, SDKs, and best practices translates into better applications and more usage models applied to GPU technology. The educational segment is impressive to see in person, even after many years of attendance. I find myself wandering through the rows and rows of poster boards describing projects that include everything from medical diagnosis advancements to better utilization of memory for ray tracing, all of course built on GPU processing. It’s a reminder that there are real problems to solve and that much of the work is still done by these small groups of students, not by billion-dollar companies.

Of course, there is a benefit to NVIDIA. The more familiar these developers and researchers are with the technology and tools it provides, both in hardware and software, the better the long-term future for NVIDIA in the space. Technology leaders know that leading in technology itself is only part of the equation. You need to convince the right people that your better product is indeed better and provide the proof to back it up. Getting traction with development groups and fostering them with guides and information during the early stages of technological shifts is what helped create CUDA and cement it as the GPU compute language of choice for the better part of a decade.

NVIDIA wants the same to occur for machine learning and AI.

The GPU Technology Conference is a public-facing outreach program that NVIDIA spends a tremendous amount of money hosting. The beginnings of the show were bare, with equal parts gaming and compute, but the growth and redirection toward a professional development event prove that it has paid dividends for the company. Just look at the dominance NVIDIA has in the AI and ML spaces in which it was previously a non-contender; that is owed at least in part to the emphasis and money pumped into an event that produces great PR and great ideas.

As for other developer events, the cupboard is getting bare for hardware companies. Intel cancelled the Intel Developer Forum a couple of years back. In hindsight, this looks like an act of hubris, that Intel believed it was big and important enough that it no longer needed to court developers and convince them to use its tech.

Now that Intel is attempting to regain a leadership position in these growing markets that companies like NVIDIA and Google have staked ground in, such as autonomous driving, artificial intelligence, and 5G, the company would absolutely benefit from a return of IDF. Whether or not the leadership at Intel recognizes the value that the event holds to developers (and media/analyst groups) remains to be seen. And more importantly, does that leadership understand the value it can and should provide to Intel’s growing product groups?

There are times when companies spend money on events and marketing for frivolous and unnecessary reasons. But proving to the market (both of developers and Wall Street) that you are serious about a technology space is not one of them. NVIDIA GTC proves that you can accomplish a lot of good with this, and I think the success it has seen in areas like machine learning proves its value. What started out as an event that many thought NVIDIA created out of hubris has turned into one of the best outward signs of being able to predict and create the future.

My Five Simple Rules to Survive Social Media

So much happened over the past couple of weeks that led many to reassess their engagement on social media. The Facebook and Cambridge Analytica debacle might have been the straw that broke the camel’s back, but social media has been under scrutiny for quite some time. From Twitter’s problem with hate speech to Snapchat’s dubious advertising, to Facebook, we have been reminded daily that social media is not a heavenly world.

Despite all that has been going on, however, I am not ready to pack everything up and walk away just yet. I have, though, reassessed how I use social media and realized that the following five simple rules might help me remain sane.

Invest time to figure out how things work

I am a strong advocate of taking matters into my own hands. When it comes to social media, you should not expect the platform you are using to be explicit about what data it collects and who it shares that data with.

I believe that often in real life, and too often in our digital life, we give away personal information without thinking about who will eventually get their hands on it. If you are a parent and you have taken your kid to one of those jump/climb/play birthday parties you know that the liability form they ask you to fill in is just a way to have your address for advertising. I have learned from a friend to say no to any additional information other than my signature.  Why are we not that picky when it comes to social networks?

Of course, most sites don’t make it easy for you to find the information, and when you do, understanding all the ramifications requires a law degree and the patience to read through it all. Many people don’t realize how Facebook works. They might know it is ad-supported, and they might know it uses some personal information, but they do not know how deep the link between the two is.

After the Cambridge Analytica news, I went into my Facebook settings and revoked access to all those apps I no longer use or could not even remember granting access to in the first place. I stopped logging in with Facebook or Google to any new service, app, or website, and instead I create an account. Yes, it is painful, but in most cases it is only a couple of minutes more. I am particularly careful with allowing access to my friends’ information. I might be ok with sharing, but I cannot make that decision for my friends.

Whether or not this is all a lost cause because there is plenty of data about us out in the wild does not really matter. The point is that some tools are available to us and while we should demand more from the platform owners, we should also start using what is already there.

Do Your Due Diligence on News

A big part of social media has to do with news. I know I use Twitter to get breaking news and analysis from my favorite tech reporters and commentators. The immediacy of social media is such that you feel like you have your hand on the pulse. Yet, that immediacy does not give enough time to vet the information being shared. Even relying on official sources might not be enough to avoid mistakes. I am not talking about fake news, I am just talking about news hot off the press that is being reported as it develops. If you are the kind of person who wants to follow news as it unfolds, make sure you validate the information, especially before sharing it. Yesterday’s shooting at YouTube HQ was a sad example of how people also try to take advantage of the situation to spread hate and misinformation, as the Twitter account of a person who was caught up in the incident was hacked.

When it comes to news you also need to be aware of your bubble. It is inevitable that you follow or befriend people you like and share interests with. This might cause you to have a very one-sided set of information even more so than it used to be when you bought your favorite newspaper or watched your favorite evening news show. It is on you to balance your sources.

Be Considerate

Be considerate to the people you interact with, and to yourself. I never say something on social media that I would not say in person, and this applies whether I am talking about people or brands. I do clean up my language more than I do in person, only because I don’t know who is on the other side of my tweet or post. This is no different from not swearing in a room full of children or when you don’t know how people feel about it.

My digital me is also kinder than my real me. When you blast out an opinion or thought, you don’t know every person you are going to reach. The number of people who will engage with you is probably a fraction of the people you reach who might be affected by what you say.

I might be kinder and have a cleaner vocabulary, but I am always myself. All right, maybe on Facebook, I am also the happier side of myself! The point is that I stay true to what I care about and what I believe. This is why at times I would veer from my tech coverage on Twitter to talk about women, diversity, and education.

Also, remember to be kind to yourself. Mute, unfollow, or report people who create a toxic environment. Don’t think for a second that just because they do not say it straight to your face, it won’t eventually affect you. In the same way as you would stop talking to someone in real life, or stop going to a place where everyone is obnoxious, you should walk away from social media.

The Pros must outweigh the Cons

For me today, the pros of being active on social media still outweigh the cons.

From a personal perspective, it gives me a way to keep in touch with friends I still have in the UK and Italy whom I don’t see often. I could call or write, but the reality is that time is limited, and as much as you care about people, it is hard to share the details of your everyday life over email or even a call.

From a professional standpoint, social media does make things easier. Think about changing jobs before social media and how much harder it was for people who had a public profile to keep their contacts. Sure, people could find you, but it was certainly not as easy. Now people can find me on Facebook, Messenger, Twitter, even on LinkedIn. If they cannot find you, it is really because they don’t want to!

Fasting is Good for the Soul

I am always surprised how much time I can spend scrolling through Facebook and Twitter. Twitter, in particular, is a window to a ton of information if you follow the right people.

When I am less present on social, I do feel I am less aware of the latest news but at the same time, there is less anxiety. I might get the news at the end of the day or the following day, but those stories are more developed without having been through that fast pace that only increases my apprehension. This has been especially true since the American Election and Brexit as pretty much everybody in my feed started talking more openly about politics.

As you read this article, I will be enjoying a few days of exploring Washington DC with my daughter. I will have my Apple Watch set up to show breaking news, and I will check in probably at the start and the end of my day. For the rest of the time, though, I will try my best to be in the moment, to make memories with my kid knowing that I will not be missed much on social media and that everybody and everything will still be there when I get back.

Arm Macs Cometh. How Mac Differentiation Will Deepen

A Bloomberg report that came out yesterday speculated that Arm-based Macs could come as early as 2020. Given both Apple’s silicon roadmap and its hardware and software roadmap, that timing seems plausible. The bigger question is why Apple would make a change, and more importantly pour so much R&D into developing a custom chip for Mac hardware, when they still only sell roughly 20 million Macs a year. I think a few key reasons make the most sense.

Intel’s Loss of Moore’s Law
As much as Intel, and others in the semiconductor industry, may try to argue that Moore’s Law is alive and well, it is not. At least not by the purist definition, which is a 24-month cycle of transistor count doubling, which assumes a move to the next transistor node, AS WELL AS an economic factor that follows a downward curve of cost per transistor. Moore’s Law, to still be Moore’s Law, has to lead to a more powerful/efficient processor that is also cheaper to make on a 24-month basis. Now, again, we can argue semantics here, but this is the dynamic Intel lived by for more than a decade.

The truth is, according to Intel’s schedule, Moore’s Law is now on a three-year timeline, not a two-year timeline, and the economic factors of the law are still debatable; as we approach 7nm and 5nm process technology, it is proving quite expensive to achieve. For Apple, Intel has been a somewhat frustrating partner, as innovations in its Core series processors continue to be delayed, leaving Apple subject to Intel’s timelines, not their own. All the while, Apple’s Arm manufacturing partner, TSMC, is already showing a path to 5nm process technology, and Apple could be making A-Series chips on that process in 2020.
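To put rough numbers on that cadence shift, here is a back-of-the-envelope sketch. The two- and three-year cadences come from the discussion above; the six-year window is just an illustrative choice:

```python
# Compare cumulative transistor-density doubling over the same span
# under the classic two-year cadence versus a slipped three-year cadence.

def doublings(years, cadence_years):
    """Number of density doublings that occur in `years` at a given cadence."""
    return years / cadence_years

span = 6  # years of roadmap to compare

two_year = 2 ** doublings(span, 2)    # classic Moore's Law pace
three_year = 2 ** doublings(span, 3)  # the slower, slipped pace

print(two_year)    # 8.0 -> 8x density in six years
print(three_year)  # 4.0 -> only 4x at the slower cadence
```

Half the cadence speed doesn't halve the gain; because the growth is exponential, a six-year roadmap delivers 8x density at a two-year pace but only 4x at a three-year pace, and the gap widens every additional node.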

By moving Macs to Arm, Apple will better control their innovation cycle for Mac hardware and not be beholden to Intel’s slowing timelines. In fact, an interesting train of thought is that Apple could technically take back Moore’s Law, at least in its performance/efficiency elements, if they wanted to.

Deeper Differentiation
I have many friends deep in the semiconductor industry. There remains a huge question around x86 processors and whether any true battery life gains will happen there. By contrast, while somewhat speculative, it is entirely plausible from a theoretical perspective that a custom-designed Arm-based SoC for a Mac could reach north of 30 hours of battery life. Consider this point: Qualcomm sees roughly 22-24 hours of battery life on its Arm-based Windows machines, and this is being accomplished with a chip that was made for smartphones, not for PCs. Imagine the kind of performance and battery life that could be accomplished if an Arm-based processor were purpose-built for a laptop. Honestly, I think a rough target of 30 hours of battery is conservative. 40 hours could be achievable given the size of battery that can be put into a laptop, and how efficient 7nm and 5nm process technology will be.
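The arithmetic behind those battery life claims is simple: runtime is capacity divided by average draw. The 50 Wh battery size and the average power figures below are my own illustrative assumptions, not vendor numbers:

```python
# Battery life (hours) = battery capacity (Wh) / average system power draw (W).

def battery_life_hours(capacity_wh, avg_draw_w):
    """Runtime in hours for a battery of `capacity_wh` watt-hours."""
    return capacity_wh / avg_draw_w

CAPACITY_WH = 50.0  # assumed thin-and-light notebook battery

# A smartphone-class SoC averaging ~2.2 W lands near the 22-24 hour
# range cited for Arm-based Windows machines; a purpose-built laptop
# SoC averaging ~1.6 W would clear the 30-hour mark.
print(battery_life_hours(CAPACITY_WH, 2.2))  # ~22.7 hours
print(battery_life_hours(CAPACITY_WH, 1.6))  # 31.25 hours
```

The point of the sketch is that the 30-hour target only requires shaving average platform power by a fraction of a watt, which is exactly the kind of gain a purpose-built SoC plus OS tuning could plausibly deliver.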

If true, then Apple will have industry-leading battery life in Macs, with the performance gains they will achieve by optimizing the software to be tuned to their custom silicon architecture. While it is true Windows vendors could move more to Qualcomm-based designs to compete, the reality is Apple will still likely have an edge because of how tightly tuned the operating system can be to the silicon architecture. But a meta point remains in this logic. If Apple does do this, and it forces PC OEMs deeper down the Qualcomm route, then this becomes a very worrisome position for Intel’s PC business.

A Last Hypothetical
There is one last hypothetical I’d like to throw out there. I find it highly unlikely Apple would move entirely away from x86 (note I did not say Intel). x86 still has performance benefits for heavier CPU-based workloads. It is true that future software development techniques could make this a moot point, but for the sake of argument let’s assume it remains for now. In this scenario, Apple would still want an x86-based design for their high-end, more professionally focused Macs.

Given that Apple wants to control more of their customization for things like security, performance, and efficiency, and to tune software to those silicon design priorities, I can see Apple moving closer to AMD to do semi-custom designs. AMD has a great business where they let partners design solutions using AMD’s x86 architecture. Both Microsoft and Sony did this for the latest Xbox and PlayStation consoles, which run custom-designed AMD x86 silicon unique to each console. I can see Apple doing a semi-custom design with AMD and thus bringing their own x86-based design to professional-level Macs. This would allow Apple to design x86 machines the way they want and prioritize things that Intel may not in its designs.

The big picture point is that Apple is going down a different path than every other hardware maker, and their priorities and needs when it comes to semiconductors are vastly different from everyone else’s. This is the main reason why it is inevitable for Apple to make all the important silicon in every piece of hardware they make.

And lastly, this is a subject for another post, but I get the feeling we may be seeing the full disruption of Intel begin to play out.

Making AI Real

Back in the 1980s and ‘90s, General Electric (GE) ran a very successful ad campaign with the tagline “We Bring Good Things to Life.” Fast forward to today, and there’s a whole new range of companies, many with roots in semiconductors, whose range of technologies is now bringing several good tech ideas—including AI—to life.

Chief among them is Nvidia, a company that began life creating graphics chips for PCs but has evolved into a “systems” company that offers technology solutions across an increasingly broad range of industries. At the company’s GPU Technology Conference (GTC) last week, they demonstrated how GPUs are now powering efforts in autonomous cars, medical imaging, robotics, and most importantly, a subsegment of Artificial Intelligence called Deep Learning.

Of course, it seems like everybody in the tech industry is now talking about AI, but to Nvidia’s credit, they’re starting to make some of these applications real. Part of the reason for this is because the company has been at it for a long time. As luck would have it, some of the early, and obvious, applications for AI and deep learning centered around computer vision and other graphically-intensive applications which happened to be a good fit for Nvidia’s GPUs.

But it’s taken a lot more than luck to evolve the company’s efforts into the data center, cloud computing, big data analytics, edge computing, and other applications they’re enabling today. A focused long-term vision from CEO Jensen Huang, solid execution of that strategy, extensive R&D investments, and a big focus on software have all allowed Nvidia to reach a point where they are now driving the agenda for real-world AI applications in many different fields.

Those advancements were on full display at GTC, including some that, ironically, have applications in the company’s heritage of computer graphics. In fact, some of these developments finally brought to life a concept for which computer graphics geeks have been pining for decades: real-time ray tracing. The computationally-intensive technology behind ray tracing essentially traces rays of light that bounce off objects in a scene, enabling hyper-realistic computer-generated graphics, complete with detailed reflections and other visual cues that make an image look “real”. The company’s new RTX technology leverages a combination of their most advanced Volta GPUs, a new high-speed NVLink interconnect between GPUs, and an AI-powered software technology called OptiX that “denoises” images and allows very detailed ray-traced graphics to be created in real-time on high-powered workstations.
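To make the ray-tracing concept above concrete, here is a toy sketch of its most basic primitive: casting a ray into a scene and finding where it first hits an object. This is purely illustrative of the idea, not Nvidia’s RTX or OptiX pipeline; real ray tracers add bounced rays, materials, lighting, and denoising on top of this, and all names here are my own.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along a unit-direction ray to the nearest
    intersection with a sphere, or None if the ray misses it."""
    # Vector from the sphere's center to the ray's origin.
    oc = tuple(o - c for o, c in zip(origin, center))
    # Coefficients of the intersection quadratic (direction is unit-length).
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None  # the ray never touches the sphere
    t = -b - math.sqrt(disc)  # nearer of the two intersection points
    return t if t >= 0 else None

# A ray looking down +z at a unit sphere centered 5 units away hits at t = 4.
print(ray_sphere_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0))  # 4.0
```

A full ray tracer repeats this test against every object for millions of rays per frame, which is why the technique was, until RTX-class hardware, too computationally expensive to run in real time.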

On top of this, Nvidia announced a number of partnerships with companies, applications, and open standards that have a strong presence in the datacenter for AI inferencing applications, including Google’s TensorFlow, Docker, Kubernetes and others. For several years, Nvidia has offered tools and capabilities that were well-suited to the initial training portion of building neural networks and other tools used in AI applications. At this year’s GTC, however, the company focused on the inferencing half of the equation, with announcements that ranged from a new version (4.0) of a software tool called TensorRT, to optimizations for the Kaldi speech recognition framework, to new partnerships with Microsoft for WindowsML, a machine learning platform for running pre-trained models designed to do inferencing in the latest version of Windows 10.

The TensorRT advancements are particularly important because that tool is intended to optimize the ability for data centers to run inferencing workloads, such as speech recognition for smart speakers and object recognition in real-time video streams on GPU-equipped servers. These are the kinds of capabilities that real-world AI-powered devices have begun to offer, so improving their efficiency should have a big influence on their effectiveness for everyday consumers. Data center-driven inferencing is a very competitive market right now, however, because Intel and others have had some success here (such as Intel’s recent efforts with Microsoft to use FPGA chips to enable more contextual and intelligent Bing searches). Nevertheless, it’s a big enough market that there are likely to be strong opportunities for Nvidia, Intel and other upcoming competitors.

For automotive, Nvidia launched its Drive Constellation virtual reality-based driving simulation package, which uses AI to both create realistic driving scenarios and then react to them on a separate machine running the company’s autonomous driving software. This “hardware-in-the-loop” based methodology is an important step for testing purposes. It allows these systems to both log significantly more miles in a safe, simulated fashion and to test more corner case or dangerous situations, which would be significantly more challenging or even impossible to test with real-world cars. Given the recent Uber and Tesla autonomous vehicle-related accidents, this simulated test scenario is likely to take on even more importance (and urgency).

Nvidia also announced an arrangement with Arm to license its Nvidia Deep Learning Accelerator (NVDLA) architecture into Arm’s AI-specific Trillium platform for machine learning. What this does is allow Nvidia’s inferencing capabilities to be integrated into what are expected to be billions of Arm core-based devices being built into IoT (Internet of Things) devices that live on the edge of computing networks. In effect, this allows the extension of AI inferencing to even more devices.

Finally, one of the more impressive new applications of AI that Nvidia showed at GTC actually ties it back with GE. Several months back, the healthcare division of GE announced a partnership with Nvidia to expand the use of AI in its medical devices business. While some of the details of that relationship remain unknown, at GTC, Nvidia did demonstrate how its Project Clara medical imaging supercomputer could use AI not only on newer, more capable medical imaging devices, but even with images made from older devices to improve the legibility, and therefore, medical value of things like MRIs, ultrasounds, and much more. Though no specifics were announced between the two companies, it’s not hard to imagine that Nvidia will soon be helping GE to, once again, bring good things to life.

The promise of artificial intelligence, machine learning and deep learning goes back decades, but it’s only in the last few years and even, really, the last few months that we’re starting to see it come to life. There’s still a tremendous amount of work to be done by companies like Nvidia and many others, but events like GTC help to demonstrate that the promise of AI is finally starting to become real.

Smart Home Competition Fuels Innovation and Creativity

Although still in its infancy, investment and engagement in artificial intelligence (AI) research continues to grow. A recent Consumer Technology Association (CTA) report citing International Data Corporation (IDC) estimates found global spending on AI was nearly 60 percent higher in 2017 than in 2016 and is projected to grow to $57 billion by 2021. And almost half of large U.S. companies plan to hire a chief AI officer in the next year to help incorporate AI solutions into operations.

As exciting as these changes are, however, one of the most exciting examples of AI right now hits a little closer to home – in fact for many of us, it’s in our living rooms.

Digital assistants are one of the hottest trends in AI, in large part thanks to the vast array of functions they offer consumers. These helpful, voice-activated devices can answer questions, stream music and manage your calendar. They can also turn off the lights, lock the doors and start your appliances when connected to compatible home systems. Budding support for digital assistants across the smart home ecosystem shifts the entire landscape of control from a web of apps to the simplicity of the human voice.

At CES® 2018, we saw many different digital assistants in action, from well-known players such as Google Assistant, Apple Siri and Amazon Alexa to other disruptive options such as Samsung’s Bixby, Microsoft’s Cortana and Baidu’s Raven H. Competition has spurred creativity and boosted innovation, as more and more products that connect with these virtual helpers emerge on the scene.

Competition in the smart speaker category, for example, has prompted greater differentiation among these devices as brands deploy unique features to attract consumers. The strategy is expected to pay off. CTA research projects U.S. smart speaker sales will increase by 60 percent in 2018 to more than 43.6 million units. Almost overnight, smart speakers powered by digital assistants have become the go-to smart home hub, a key component of the Internet of Things (IoT) and the catalyst driving smart home technology revenue growth of 34 percent to a predicted $4.5 billion this year.

The smart speaker category is also boosting other categories of smart home innovations. The rise of smart home technology – expected to reach 40.8 million units in the U.S. in 2018, according to CTA research – creates a new space for digital innovators to connect more devices, systems and appliances in more useful ways. This, in turn, is redefining the boundaries of the tech industry. Competition has fueled creativity, and creativity has expanded convenience – and Americans love it.

Fifteen years ago, we didn’t necessarily think of kitchen and bath manufacturers such as Kohler or Whirlpool as tech companies. Today, these companies are finding ways to integrate their products into the IoT, such as Whirlpool’s “Scan-to-Cook” oven and Kohler’s Verdera smart mirror. And Eureka Park™ – the area of the CES show floor dedicated to startups – hosted dozens of smart home innovators from around the world in January, launching their products for the first time to a global audience. Part of what’s so amazing about these technologies is they work together across platforms to create more efficient, more economical, more livable homes.

For example, South Carolina-based Heatworks developed a non-electric system for heating water, along with an app that lets system users control water temperature and shower length from their phones. New York-based Solo Technology Holdings has created the world’s first smart safe that sends you mobile alerts when it opens. Lancey Energy Storage, out of Grenoble, France, introduced the first smart electric heater, which saves more money and energy than traditional space heaters. And Israeli startup Lishtot showcased a keychain-sized device that tests drinking water for impurities and shares that data wirelessly via Bluetooth. These are just a few of the innovations made possible by IoT.

The IoT revolution has leveraged what I like to call the four C’s: connectivity, convenience, control and choice. Just as we experience the physical world with our five senses, we experience the digital world through the four C’s – they’ve become organic to our modern daily life, yet they are subtle enough that we often take them for granted. Consumers expect the four C’s to be ubiquitous. They are the default settings that anchor our digital experiences, which now increasingly include our homes and our appliances.

The smart home phenomenon at CES represents what the tech industry does so well: companies big and small leading the IoT charge, crafting unique innovations that can be implemented across ecosystems. And everyone – from the largest multinational companies to the smallest, most streamlined startups – has an opportunity to redefine what it means to be at home.

It’s a redefinition that consumers embrace. Over the course of this year, I have no doubt that we’ll see the efficiencies and improvements technology delivers expanding beyond the home, into our workplaces and our schools. This remarkable evolution – driven by visionary innovation and fierce competition – is proof that technology is changing our lives for the better, saving us time and money, solving problems large and small and raising the standard of living for all.

Podcast: Apple Education, NVidia Tech Conference, Microsoft Reorg, Facebook Memo

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the news from Apple’s Education event, analyzing the NVidia GPU Technology Conference, chatting about the recent Microsoft reorganization, and debating the impact of the recent Facebook memo release.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Facebook’s Oculus Go Looks Great, But Will People Buy It?

I recently had the opportunity to test Facebook’s upcoming Oculus Go virtual reality headset. Announced last year and due to ship later this year, the device made waves because Facebook plans to sell the standalone headset, which works without a PC or smartphone, for $200. My hands-on testing showed a remarkably polished device that yields a very immersive experience. But Facebook has yet to articulate just how it plans to market and sell Oculus Go, so its success is far from assured.

High-quality Optics and Sound
When Facebook first announced Oculus Go, and its price point, many presumed that the device would drive a VR experience more comparable to today’s screenless viewers (such as Samsung’s Gear VR) than a high-end tethered headset such as Facebook’s own Oculus Rift. While it’s true that the hardware inside Oculus Go may not measure up spec-for-spec to high-end rigs connected to top-shelf PCs, the device itself is a testament to what’s possible when one vendor produces a product that tightly integrates the hardware and the software. It’s clear that Facebook and hardware partner Xiaomi have done a masterful job of tuning the software to utilize the hardware’s capabilities.

I spent about 20 minutes in the headset and was amazed at how easy it was to wear, how great the optics looked, the high quality of the integrated audio, and the functionality of the hand-held controller. I have tested most of the VR hardware out there, and this was among the most immersive experiences I’ve had in VR. That’s an incredible statement when you consider the cost of the hardware, and the fact it is inherently limited to three-degrees-of-freedom motion-tracking capabilities (high-end rigs offer six degrees).

Facebook has slowly been rolling out details about the hardware inside Oculus Go, including details about the next-generation lenses that significantly reduce the screen-door effect that impacts most of today’s VR experiences. The company has also talked about some of the tricks it employs to drive a higher-quality optical experience while hammering the graphics subsystem less, leading to better battery life and less comfort-destroying heat.

One of my key takeaways from the demonstration was that with the Oculus Go, Facebook had created an immensely comfortable VR headset, and I can’t overstate the importance of that. Today, even the most die-hard VR fans must contend with the fact that if they’re using a screenless viewer such as the Oculus-powered Gear VR with a Samsung smartphone, they can only do it for short periods of time before the heat emanating from the smartphone makes them want to take it off. Heat is less of an issue with tethered headsets, but the discomfort of the tether weighing down the headset means there are limits to just how much time you can spend fully immersed in those rigs, too.

But Can They Sell It?
So the Oculus Go hardware is great, and the standalone form factor drives a unique and compelling virtual reality experience. But the question remains: How is Facebook going to market and sell this device, and is there enough virtual reality content out there to get mainstream customers to lay down $200?

To date, Facebook hasn’t said much publicly about the way it intends to push Oculus Go into the market, and through which channels. The company undoubtedly learned a great deal about channels with its successes (and failures) with the Oculus Rift. The bottom line is that, for the foreseeable future people really want to try out virtual reality before they buy it. Oculus Go should be significantly easier to demonstrate in store than a complicated headset tethered to a PC, but how will Facebook incentivize the channel? What apps will it run? Who will ensure that the devices are clean and operational?

When I talk to Oculus executives, their belief that virtual reality is an important and vital technology is immediately clear. Often it feels as if they see its ascension as a certainty and just a matter of time. But for the next few years, moving virtual reality from an early adopter technology to something the average consumer will want to use is going to take herculean marketing, education, and delivery efforts. With Oculus Go, Facebook has a key piece of the puzzle: a solid standalone device at a reasonable price. Now it needs to put into place the remaining pieces to ensure a successful launch.

News You Might Have Missed: March 30th, 2018

Microsoft Reorg

On Thursday, Satya Nadella announced a Microsoft reorg that affected the company across the board but left the current Windows and Devices team without its leader, Terry Myerson, who is moving on to new opportunities outside of Microsoft.

The new groups are the following:

Experience and Devices – led by Rajesh Jha

Cloud and AI Platform – led by Scott Guthrie

AI and Research – led by Harry Shum

After 21 years, Terry Myerson is leaving Microsoft, taking some time off to spend with his family and then pursuing new opportunities.

Via Microsoft

  • The previous Windows and Devices team has been split across the two new organizations. In Experience and Devices, you now have: Devices, still led by Panos Panay; Windows, led by Joe Belfiore; New Experiences and Technology, led by Kudo Tsunoda; and Enterprise Mobility and Management, led by Brad Anderson
  • I really like that Microsoft is starting to talk about experience and product ethos that goes across hardware and software. I am also delighted that Panos Panay will be responsible for driving that, because his focus on Surface has always been to get the technology out of the way to let users enjoy using the device and focus on their workflow. I believe that Panay and Belfiore will work well together to drive a rich experience across all devices including Surface, Xbox, and possible future MR headsets.
  • I believe that while Windows will continue to be instrumental to the next phase of Microsoft’s business it will no longer be the focus of the narrative. On the enterprise side, we recently saw the rise of Microsoft 365 which is precisely bringing all aspects of the Microsoft offering together to deliver an experience.
  • On the consumer side, however, the focus seems to remain on Windows, mostly because so much of the overall revenue is coming from it. As Microsoft figures out how the business model must change going forward, possibly shifting to core services and experiences rather than the OS, I expect the narrative to shift to a consumer version of Microsoft 365.
  • Despite the fact that Windows now powers way more than PCs most still think of computing when they think of Microsoft and Windows.
  • It will be interesting to see how much more flexibility Panay will now have with Surface and Windows when it comes to features and/or tweaks, for lack of a better word, that will be available only on first-party devices.
  • I have two concerns when I look at the structure:
    • AI Perception and Mixed Reality under Kipman are not part of the Experience and Devices group, which makes me worry that for Microsoft this will continue to be mostly an enterprise play going forward
    • Universal Store is also not part of the Experience and Devices team and that is troubling to me as a big part of the experience in today’s devices comes alive through the apps. The store is key to the success of Windows 10 S and not having it be part of the Experience and Devices team might not allow for a stronger prioritization.

Verizon might bring back the Palm Brand

Android Police said on Thursday that Verizon is rumored to be bringing back the Palm brand on an Android phone by the end of 2018. TCL bought the rights to make phones under the Palm name back in 2015, and last August a TCL executive confirmed plans to bring back the Palm brand in the second half of 2018.

Via Android Police

  • Nostalgia is in, there is no question about it! Nokia, Blackberry and now Palm. But things could be much easier this time around for TCL
  • TCL tried to bring back the Blackberry brand with the KEYONE but they overestimated both how much people still cared about the brand and how much people still wanted a keyboard.
  • With Palm, I think the situation is a bit different and might actually work quite well for TCL.
  • First, the love for the Palm brand is still strong in the US. Palm, more so than Blackberry, was always seen as the brand that brought smartphones to consumers.
  • Second, there is the fact that Verizon, which was a big channel for Palm, has been looking to rebuild what it had successfully done with the Droid brand. With Motorola more interested in keeping its options open rather than doing something exclusive with Verizon and Huawei not being an option, it seems that Verizon is looking for an alternative.
  • It will be interesting to see whether Verizon will go for a pure Android play or will try to customize the software in any way, especially around its services.
  • For TCL, a successful Palm relaunch will mean revenue and potentially an indirect way for consumers to get to know the company as TCL rather than Alcatel, which has been its stronger name in the phone business.
  • The only problem that all the companies betting on nostalgia are facing is that, to the new generations, names like Palm, Nokia, and Blackberry do not mean much. This is where Verizon’s marketing dollars might come in useful.

150 Million MyFitnessPal Accounts breached

Under Armour said on Thursday that 150 million MyFitnessPal diet and fitness app accounts were compromised in February. In what Reuters calls one of the biggest hacks in history, stolen data included names, email addresses and scrambled passwords for the app. Under Armour said it is working with data security firms and law enforcement but did not provide any specific details.

Via Reuters 

  • This could have been much worse for UA, as users entered their social security numbers, driver’s license numbers, and payment information in order to use the app, but this data was not compromised.
  • MyFitnessPal is part of Under Armour’s connected fitness division, and while it represents less than 2% of the company’s sales, it is a part of the business that is a core focus for future growth.
  • The big concern for users is, of course, the username and password combinations that the hackers could try. Very often consumers do not change their passwords across apps or websites, which in this case would open up the opportunity for the hackers to see much more than fitness information.
  • While the recent Facebook/Cambridge Analytica saga has brought privacy to the front of everyone’s mind, I believe very few consumers have any concerns about where the data they enter to use an app will be stored.
  • In general, I think there is more concern about online shopping than there is with apps that offer a service rather than a product.
  • While the payment data, in this case, was not compromised, the incident might raise awareness of mobile payment options like Apple Pay and Android Pay, where the app owner does not see your payment information. Consumers might look more for apps and websites that support these options going forward so they do not have to share their data.
  • On the other hand, I believe brands have a higher responsibility, as more and more data is collected from users, to reassure them about where the data is kept and how secure it is.

NVIDIA DGX-2 solidifies leadership in AI development

During the opening two-plus-hour keynote to NVIDIA’s GPU Technology Conference in San Jose this week, CEO Jensen Huang made announcements and proclamations on everything from autonomous driving to medical imaging to ray tracing. The breadth of coverage is substantial now for the company, a dramatic shift from its roots solely in graphics and gaming. These kinds of events underscore the value that NVIDIA has created as a company, both for itself and the industry.

In that series of announcements, Huang launched a $399,000 server. Yes, you read that right – a machine with a $400k price tag. The hardware is aimed at the highest-end, most demanding AI applications on the planet, combining the best of NVIDIA’s hardware stack with its years of software expertise. Likely the biggest customer for these systems will be NVIDIA itself, as the company continues to upgrade and improve its deep learning systems to aid in the development of self-driving cars, robotics, and more.

The NVIDIA DGX-2 makes the claim of being the world’s first 2 petaFLOPS system, generating more compute power than any competing server in a similar size and density.

The DGX-2 is powered by 16 discrete V100 graphics chips based on the Volta architecture. These sixteen GPUs have a total of 512GB of HBM2 memory (now 32GB per card rather than 16GB) and an aggregate bandwidth of 14.4 TB/s. Each GPU offers 5,120 CUDA cores for a total of 81,920 in the system. The Tensor cores that make up much of the AI capability of the design breach the 2.0 PFLOPS mark. This is a massive compilation of computing hardware.

The previous DGX-1 V100 system, launched just 6 months ago, ran on 8 GPUs with half the memory per GPU. Part of the magic that makes the DGX-2 possible is the development of NVSwitch, a new interconnect architecture that allows NVIDIA to scale its AI integrations further. The physical switch itself is built on 12nm process technology from TSMC and encompasses 2 billion transistors all on its own. It offers 2.4 TB/s of bandwidth.

As PCI Express became a bottleneck for multi-GPU systems that are crunching on enormous data sets typical of deep learning applications, NVIDIA worked on NVLink. First released with the Pascal GPU design and used with Volta as well, the V100 chip has support for 6 NVLink connections and a total of 300 GB/s of bandwidth for cross-GPU communication.
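The aggregate figures in the last few paragraphs follow directly from the per-GPU numbers, and it’s easy to sanity-check them. A quick sketch (figures are the ones quoted in the text, not independent measurements):

```python
# Sanity-check the DGX-2 aggregate specs from the per-GPU figures quoted above.
gpus = 16
hbm2_per_gpu_gb = 32          # up from 16GB on the original V100
cuda_cores_per_gpu = 5120
nvlink_links_per_gpu = 6      # V100 supports 6 NVLink connections
nvlink_gbs_per_link = 50      # GB/s bidirectional per NVLink 2.0 link

total_hbm2_gb = gpus * hbm2_per_gpu_gb            # 512 GB of HBM2 across the system
total_cuda_cores = gpus * cuda_cores_per_gpu      # 81,920 CUDA cores in total
per_gpu_nvlink_gbs = nvlink_links_per_gpu * nvlink_gbs_per_link  # 300 GB/s per GPU

print(total_hbm2_gb, total_cuda_cores, per_gpu_nvlink_gbs)  # 512 81920 300
```

The numbers line up: 16 GPUs at 32GB each gives the 512GB of HBM2, 16 × 5,120 gives the 81,920 CUDA cores, and six 50 GB/s links give the 300 GB/s of cross-GPU bandwidth per V100.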

NVSwitch builds on NVLink as an on-node design and allows any two pairs of GPUs to communicate at full NVLink speed. This facilitates the next level of scaling, moving beyond the number of NVLink connections on a per-GPU basis and allowing a network to be built around the interface. The switch itself has 18 links and is capable of eight 25 Gbps bi-directional connections. Though the DGX-2 uses twelve NVSwitch chips to connect 16 GPUs, NVIDIA tells me that there is no technological reason they couldn’t push beyond that. There is simply a question of need and physical capability.

With the DGX-2 system in place, NVIDIA claims to see as much as a 10x speedup in just the 6 months since the release of DGX-1, on select workloads like training FAIRSEQ. Compared to traditional data center servers using Xeon processors, Huang stated that the DGX-2 can provide computing capability at 1/8 the cost, 1/60 the physical space, and 1/18 the power. Though the repeated line of “the more you spend, the more you save” might seem cliché, NVIDIA hopes that those organizations investing in AI applications see value and adopt.

One oddity in the announcement of the DGX-2 was Huang’s claim that it represented the “world’s largest GPU”. The argument likely stems from Google’s branding of the “TPU” as a collection of processors, platforms, and infrastructure into a singular device and NVIDIA’s desire to show similar impact. The company may feel that a “GPU” is too generic a term for the complex systems it builds, which I would agree with, but I don’t think co-opting a term that has significant value in many other spaces is the right direction.

In addition to the GPUs, the DGX-2 does include substantial hardware from other vendors that acts as a support system. This includes a pair of Intel Xeon Platinum processors, 1.5 TB of system memory, eight 100 GigE network connections, and 30TB of NVMe storage. This is an incredibly powerful rackmount server that services AI workloads at unprecedented levels.

The answer I am still searching for is to a simple question: who buys these? NVIDIA clearly has its own need for high-performance AI compute capability, and the need to simplify and compress that capability to save money on server infrastructure is substantial. NVIDIA is one of the leading developers of artificial intelligence for autonomous driving, robotics training, algorithm and container set optimization, etc. But other clients are buying in – organizations like New York University, Massachusetts General Hospital, and UC Berkeley have been using the first-generation device in flagship, leadership development roles. I expect that will be the case for the DGX-2 sales targets: that small group on the bleeding edge of AI development.

Announcing a $400k AI accelerator may not have a direct effect on many of NVIDIA’s customers, but it clearly solidifies the company’s position of leadership and internal drive to maintain it. With added pressure from Intel, which is pushing hard into the AI and machine learning fields with acquisitions and internal development, NVIDIA needs to continue down its path and progression. If GTC has shown me anything this week, it’s that NVIDIA is doing just that.