Notch Wars

on September 4, 2018
Reading Time: 4 minutes

Despite no longer being a hyper-growth market, smartphones are still a fascinating category to study, not only because of the unprecedented impact they have in enabling humans of all shapes and sizes, races, and economic circumstances to engage in personal computing, but also because of the global competitive strategies at play.

Tech Content Needs Regulation

on September 4, 2018
Reading Time: 4 minutes

It may not be a popular perspective, but I’m increasingly convinced it’s a necessary one. The new publishers of the modern age—including Facebook, Twitter, and Google—should be subject to some type of external oversight that’s driven by public interest-focused government regulation.

On the eve of government hearings with the leaders of these tech giants, and in an increasingly harsh environment for the tech industry in general, frankly, it’s fairly likely that some type of government intervention is going to happen anyway. The only real questions at this point are what, how, and when.

Of course, at this particular time in history, the challenges and risks that come with trying to draft any kind of legislation or regulation that wouldn’t do more harm than good are extremely high. First, given the toxic political climate that the US finds itself in, there are significant (and legitimate) concerns that party-influenced biases could kick in—from either side of the political spectrum. To be clear, however, I’m convinced that the issues facing new forms of digital content go well beyond ideological differences. Plus, as someone who has long-term faith in the ability of the democratic principles behind our great nation to eventually get us through the morass in which we currently find ourselves, I strongly believe the issues that need to be addressed have very long-term impacts that will still be critically important even in less politically challenged times.

Another major concern is that the current set of elected officials aren’t the most digitally-savvy bunch, as was evidenced by some of the questions posed during the Facebook-Cambridge Analytica hearings. While there is little doubt that this is a legitimate concern, I’m at least somewhat heartened to know that there were quite a few intelligent issues raised during those hearings. Additionally, given all the other developments around potential election influencing, it seems clear that many in Congress have been compelled to become more intelligent about tech industry-related issues, and I’m certain those efforts to be more tech savvy will continue.

From the tech industry perspective, there are, of course, a large number of concerns as well. Obviously, no industry is eager to be faced with any type of regulations or other laws that could be perceived as limiting their business decisions or other courses of action. In addition, these tech companies have been particularly vocal about saying that they aren’t publishers and therefore shouldn’t be subject to the many laws and regulations already in place for large traditional print and broadcast organizations.

Clearly, companies like Facebook, Twitter, and Google aren’t publishers in the traditional sense of the word. The problem is that what needs to change is the definition of publishing. If you consider that the end goal of publishing is to deliver information to a mass audience and do so in a way that can influence public opinion—these companies aren’t just publishers, they are literally the largest and most powerful publishing businesses in the history of the world. Period, end of story.

Even the wildest dreams of publishing and broadcasting magnates of yore like William Randolph Hearst and William S. Paley couldn’t have imagined the reach and impact that these tech companies have built in a matter of just a decade or so. In fact, the level of influence that Facebook, Twitter, and Google now have, not only on American society but on the entire world, is truly staggering. Toss in the fact that they also have access to vast amounts of personal information on virtually every single one of us, and the impact is truly mind-blowing.

In terms of practical impact, the influence of these publishing platforms on elections is a serious concern in the near term, but their impact reaches far wider and crosses into nearly all aspects of our lives. For example, the return of childhood measles—a disease that was nearly eradicated from the US—is almost entirely due to scientifically invalid anti-vaccine rhetoric being spread across social media and other sites. Like election tampering, that’s a serious impact on the health and safety of our society.

It’s no wonder, then, that these large companies are facing the level of scrutiny that they are now enduring. Like it or not, they should be. We can no longer accept the naïve notion that technology is inherently neutral and free of any bias. As we’ve started to learn from AI-based algorithms, any technology built by humans will include some level of “perspective” from the people who create it. In this way, these tech companies are also similar to traditional publishers, because there is no such thing as a truly neutral set of published or broadcast content. Nor should there be. Like these tech giants, most publishing companies generally try to provide a balanced viewpoint and incorporate mechanisms and fail-safes to do so, but part of their unique charm is, in fact, the perspective (or bias) that they bring to certain types of information. In the same way, I think it’s time to recognize that there is going to be some level of bias inherent in any technology and that it’s OK to have it.

Regardless of any bias, however, the fundamental issue is still one of influence and the need to somehow moderate and standardize the means by which that influence is delivered. It’s clear that, like most other industries, large tech companies aren’t particularly good at moderating themselves. After all, as hugely important parts of a capitalist society, they’re fundamentally driven by return-based decisions, and up until now, the choices they have made and the paths they have pursued have been enormously profitable.

But that’s all the more reason to step back and take a look at how and whether this can continue or if there’s a way to, for example, make companies responsible for the content that’s published on their platforms, or to limit the amount of personal information that can be used to funnel specific content to certain groups of people. Admittedly, there are no easy answers on how to fix the concerns, nor is there any guarantee that legislative or regulatory attempts to address them won’t make matters worse. Nevertheless, it’s becoming increasingly clear to a wider and wider group of people that the current path isn’t sustainable long-term and the backlash against the tech industry is going to keep growing if something isn’t done.

While it’s easy to fall prey to the recent politically motivated calls for certain types of changes and restrictions, I believe it’s essential to think about how to address these challenges over the longer term, independent of any current political controversies. Only then can we hope to get the kind of efforts and solutions that will allow us to leverage the tremendous benefits these new publishing platforms enable, while preventing them from abusing their position in our society.

Podcast: VMWorld 2018, Google Assistant, IFA Announcements

on September 1, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing VMWare’s VMWorld conference, chatting about new multi-language additions to Google Assistant, and analyzing a variety of product announcements from the IFA show in Europe, including those from Lenovo, Dell, Intel, Sony, Samsung and others.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Windows on ARM: Good Today, Better Tomorrow

on August 31, 2018
Reading Time: 4 minutes

I’ve spent the last few weeks using Lenovo’s Miix 630 detachable, which pairs Qualcomm’s Snapdragon 835 processor with Windows 10 Pro (upgraded from Windows 10 S). It hasn’t been an entirely smooth experience, and there is still work to be done on this new platform (especially regarding a few key apps). But this combination of Windows and ARM is undeniably powerful for a frequent business traveler such as me. Early challenges aside, it’s hard not to see Qualcomm, and eventually the broader ARM ecosystem, playing a key role in the PC market down the road.

The Good
As I type this, I’m finishing up a New York City trip where I attended ten meetings in two days. I needed access to–and the ability to quickly manipulate–Web-based data during these meetings, a task that I’ve never been able to accomplish well on my LTE-enabled iPad Pro. So I typically bring my PC and a mobile hotspot so I can stay connected in Manhattan throughout the day. I carry my computer bag, too, because I need the power brick: I invariably have to plug in my notebook at some point or risk running out of power before the end of the day. This time out, I left the mobile hotspot, power cord, and computer bag behind, carrying just the Miix. I used it throughout the day, both during meetings and in the times in between. The LTE connection was strong throughout, and I didn’t experience any performance issues. When I returned to the hotel room after 6 pm, after close to 11 hours of pretty much constant use, I checked the battery: 52%.

That’s a game changer, folks. It’s actually a bit hard to describe just how freeing it is to spend the day using a PC without worrying about connectivity or battery life. With battery-saver mode enabled, I could well have accomplished two days of meetings without needing a charge. Does everybody care about these things? Obviously not. Would I swap this device for my standard PC, where I perform heavier workloads? No, not today.

But I’m beginning to think that day may be closer than many expect.

The Bad
I’ve come to realize that my most-preferred form factor for work-related tasks is a notebook (which is why I’m excited to see Lenovo has already announced plans for the Snapdragon-powered Yoga C630). That said, the Miix 630 is a solid detachable with a good display, somewhat oversized bezels, and a reasonably good keyboard. However, at $899 list, it is quite expensive for a device that most people would use as a secondary computer. And it doesn’t help that Qualcomm announced the follow-on 850 chip before Lenovo had even begun shipping this product to customers.

And at present, this product—and other Windows on Snapdragon products—must remain secondary devices because some limitations prevent them from serving as a primary PC for many users. Performance is one, although honestly I didn’t find the performance to be that limiting on this machine when using it for my described tasks (Lenovo seems to have done a good job of tuning the system). The main reason these products will have to serve as secondary devices is that there are still some deal-breaking app challenges. For me, the primary one was the fact that I couldn’t install and use Skype for Business, which is the primary way I communicate with my work colleagues and how my company conducts meetings. I was able to work around the meeting problem by joining meetings via the Web-based version of Skype for Business, but there’s no way to do that for instant-messaging communication. I had a similar problem with Microsoft Teams, but there’s also a Web-based workaround for that program.

I understand the challenges Microsoft faces with making its ever-broadening portfolio of apps work on this new version of Windows, but the fact that I couldn’t use this important first-party app is pretty frustrating.

The Future
Microsoft still has some work to do in terms of app compatibility, but I’m hopeful the company will sort much of this out in the coming months. In the meantime, we now know that not only does Qualcomm have strong plans for future PC-centric chips, but ARM itself has now announced a roadmap that it promises will usher in next-generation chips from other licensees that should offer desktop-caliber performance with smartphone-level power requirements.

Of course, there are still plenty of other hurdles to address. Many IT organizations will push back on the idea of ARM-based PCs, with Intel understandably helping to lead that charge. There’s the ongoing issue of cost and complexity when it comes to carrier plans. Finally, there’s a great deal of education that will need to happen inside the industry itself around the benefits of this platform.

In the end, I’m confident that Windows on Snapdragon (and Windows on ARM more broadly) is going to eventually coalesce into an important part of the PC market, especially as 5G becomes pervasive in the next few years. I fully expect many long-time PC users to question its necessity, but I also expect a small but growing percentage of users to have the same types of “aha” moments that I did when testing out these systems. And, perhaps most importantly, I believe future iterations of these devices are going to appeal a great deal to the next generation of users who expect their PCs to act more like the smartphones and tablets they grew up using.

News You Might Have Missed: Week of August 31, 2018

on August 31, 2018
Reading Time: 4 minutes

Google Assistant is Now Bilingual

As of Thursday this week, Google Assistant is bilingual. Users can jump between two different languages across queries without having to go back to their language settings. Once users select two of the supported languages (English, Spanish, French, German, Italian, and Japanese), they can speak to the Assistant in either language and the Assistant will respond in kind. Previously, users had to choose a single language setting for the Assistant and change it each time they wanted to use another language; now it’s a simple, hands-free experience for multilingual households. Getting this to work, however, was not simple, according to Google. In fact, it was a multi-year effort that involved solving problems that can be grouped into three areas: identifying multiple languages, understanding multiple languages, and optimizing multilingual recognition for Google Assistant users. Google says it is working to teach the Google Assistant how to process more than two languages simultaneously.

Lenovo Yoga C630 WOS Laptop First with Snapdragon 850

on August 30, 2018
Reading Time: 3 minutes

More than a full year into the life of the Windows on Snapdragon products, the jury is still out on how well received the first generation of notebooks was. Qualcomm launched machines with three critical OEM partners: HP, ASUS, and Lenovo. All three systems offered a different spin on a Windows laptop powered by a mobile-first processor. HP had a sleek and sexy detachable, the ASUS design was a convertible with the most “standard” notebook capabilities, and the Lenovo Miix design was a detachable that favored function over form.

Reviews indicated that while the community loved the extremely long battery life that the Snapdragon platform provided, the performance and compatibility concerns were significant enough to sway purchase decisions. Prices were also a bit steep, at least when compared on a raw-performance basis against Intel-based solutions.

Maybe the best, but least understood, advantage of the Snapdragon-based Windows notebooks was the always connected capability provided by the integrated Gigabit LTE modem. It took only a few trips away from the office for me to grasp the convenience and power of not having to worry about connectivity or hunting for a location with open Wi-Fi service in order to send some emails or submit a news story. Using your notebook like your smartphone might not be immediately intuitive, but now that I have tasted that reality, I need it back.

As a part of a long-term strategy to take market share in the Windows notebook market, Qualcomm announced the Snapdragon 850 processor in June during Computex in Taipei. A slightly faster version of the Snapdragon 845 utilized in today’s top-level Android smartphones, the SD 850 is supposed to be 30% faster than the SD 835 (powering the first generation of Always On, Always Connected PCs) while delivering 20% better battery life and 20% higher peak LTE speeds.

Those are significant claims for a single generational jump. The 20% improvement in battery life alone is enough to raise eyebrows, as the current crop of Snapdragon devices already provides the best battery life we have ever tested on a Windows notebook. The potential for 30% better performance is critical as well, considering the complaints about system performance and user experience that the first generation received. We don’t yet know where that 30% will manifest: in single-threaded capability or only in multi-threaded workloads. It will be important to determine that as the first devices make their way to market.

Which leads us to today’s announcement about the Lenovo Yoga C630 WOS, the first notebook to ship with the Snapdragon 850 processor. The design of the machine is superb and comes in at just 2.6 pounds. It will come with either 4GB or 8GB of LPDDR4X memory and 128GB or 256GB UFS 2.1 storage, depending on your configuration. The display is 13.3 inches with a resolution of 1920×1080 and will have excellent color and viewing angles with IPS technology. It has two USB Type-C ports (supporting power, USB 3.0, and DisplayPort) along with an audio jack and fingerprint sensor.

When Lenovo claims the Yoga C630 WOS will have all-day battery life, they mean it. Lenovo rates it at 25 hours which is well beyond anything similarly sized notebooks with Intel processors have been able to claim. Obviously, we will wait for a test unit before handing out the trophies, but nothing I have read or heard leads me to believe this machine won’t be the leader in the clubhouse when it comes to battery life this fall.

Maybe more important for Qualcomm (and Arm) with this release is how Lenovo is positioning the device. No longer relegated to a lower-tier brand in the notebook family, the Snapdragon 850 iteration is part of the flagship consumer Yoga brand. The design is sleek and in line with high-end offerings built around Intel processors. All signs indicate that Lenovo is taking the platform more seriously for this launch, and that mentality should continue with future generations of Snapdragon processors.

I don’t want to make too much of this announcement and product launch without information from other OEMs and their plans for new Snapdragon-based systems, but the initial outlook is that momentum is continuing to build in favor of the Windows-on-Arm initiative. The start was rocky, but in reality, we expected that to be the case after getting hands-on with the earliest units last year. Qualcomm was at risk that partners would back away from the projects because of it, or that Intel might put pressure (marketing or product-based) on them to revert.

For now, that doesn’t appear to be the case. I am eager to see how the Lenovo Yoga C630 WOS can close the gap for Windows-on-Snapdragon and continue this transformative move to a more mobile, more connected computing ecosystem.

Semiconductor Foundry Battles

on August 30, 2018
Reading Time: 3 minutes

As an analyst with a background in the semiconductor industry (my first tech job was at Cypress Semiconductor) who covered semiconductors heavily when I first joined the analyst community in 2001, I was beginning to think this knowledge and expertise was nice to have but becoming less relevant. For a stretch, it seemed like the semiconductor industry was becoming stale. An industry that was once the pinnacle of enabling innovation felt as though it was an afterthought as smartphones, tablets, wearables, and more got all the attention.

Intel’s CEO Search: What to Watch for as Intel Picks a New CEO

on August 29, 2018
Reading Time: 3 minutes

A few months back, Intel pushed CEO Brian Krzanich out the door for a relationship indiscretion that happened before he became CEO. However, many of my sources tell me that there were a lot of other things that forced him out the door besides this publicized reason. I won’t get into what sources tell me about his leadership breakdown, but suffice it to say that his days were numbered for many reasons, and it was only a matter of time before he would be fired.

Could Gen Z Pass on College?

on August 29, 2018
Reading Time: 4 minutes

As many of you know, I have a ten-year-old girl who is about to embark on the exciting adventure of homeschooling. Education is very important to us, or maybe I should say knowledge is. Knowledge on a broad list of topics driven by her interests and not dictated only by what a particular curriculum has decided it is good for her to know at a given age. Knowledge that treats facts as facts but does not tell the story from only one viewpoint, one voice. Knowledge, call it street smarts, that will get her ready for life. While she is still a long way away from college, her dad and I have been discussing the pros and cons of sending her to college. For the longest time, college was the way you prepared your kids for the work world; it was the last step you took in that journey of enlightenment before you became an adult. Over the past couple of years, however, some, including me, have started to question whether the high cost of a college education is worthwhile.

College Education has become a Consumer Product

I recently watched a TED Talk by Professor Sanjay Samuel, who very eloquently explained that a college education has become unaffordable for most if not all American families. The combined debt that students in America hold amounts to over $1 trillion, all in the name of securing a better-paid job. Education is an investment in a better future, they tell us and we tell our kids! What it has become, though, is a way to categorize future workers so companies can better assess them before hiring. Education is a consumer product where teachers are service providers, students are consumers, and subject matter is the content. The profit sits with the loan companies, but the return on investment for students is questionable. Education is being upsold, he says, so that college is the new high school. The question boils down to whether or not the higher pay one gets at the end of a college degree is worth the amount of debt that got you there and the years in lost wages. Economists say it is if you complete your degree, but Professor Samuel argues that the only reason this is the case is that wages for people with only a high-school degree have been cut time and time again.

The solution to this problem, according to Professor Samuel, is for parents and students to treat higher education like a consumer product, in the same way the ones profiting from it have been doing for years. What if there were an app that could calculate an income-based tuition for you, linking how much your tuition will cost to your future income? Tuition, after all, should not be the same for all degrees. An engineering degree uses far more resources than a philosophy degree, yet the tuition cost is the same, even though engineering students will earn considerably more than philosophy students.
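
To make that trade-off concrete, here is a minimal, purely hypothetical sketch of the kind of calculation such an app might run. None of the figures or the formula come from Professor Samuel’s talk; they are illustrative assumptions comparing debt plus lost wages against the expected wage premium of a degree.

```python
# A purely hypothetical sketch (not Professor Samuel's actual proposal) of the kind of
# calculation an "income-based tuition" app might run: compare the total cost of a
# degree (tuition debt plus wages forgone while studying) against the extra pay the
# degree is expected to bring. Every number below is an illustrative assumption.

def degree_breakeven_years(annual_tuition, years_of_study,
                           forgone_annual_wage, grad_annual_wage,
                           non_grad_annual_wage):
    """Working years needed for the wage premium to repay debt plus lost wages."""
    total_cost = (annual_tuition + forgone_annual_wage) * years_of_study
    wage_premium = grad_annual_wage - non_grad_annual_wage
    if wage_premium <= 0:
        return float("inf")  # in wage terms, the degree never pays for itself
    return total_cost / wage_premium

# Two degrees with the same sticker price but very different expected earnings.
for degree, grad_wage in [("engineering", 75_000), ("philosophy", 45_000)]:
    years = degree_breakeven_years(
        annual_tuition=30_000,        # hypothetical yearly tuition
        years_of_study=4,
        forgone_annual_wage=30_000,   # hypothetical wages skipped while in school
        grad_annual_wage=grad_wage,
        non_grad_annual_wage=35_000,  # hypothetical pay without the degree
    )
    print(f"{degree}: roughly {years:.0f} years to break even")
```

With these made-up numbers, the identical tuition bill pays itself back in roughly 6 working years for the engineering graduate and roughly 24 for the philosophy graduate, which is exactly the asymmetry an income-based tuition would try to reflect.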

The solution Professor Samuel presents to the education problem is very intriguing, and I, for one, as a parent, would love to be able to help my daughter figure out her investment. With changes driven by technologies such as Artificial Intelligence and Machine Learning, the “value” associated with degrees might differ in the future. Sadly, though, education has become such a business that, as is often the case when there is too much money at stake, things change slowly.

Supply and Demand

Continuing the consumer market analogy, another problem I see with education is one of supply and demand. For many years, both here in the US and in Europe, corporations have looked at degrees as the first pass for sifting through candidates. So to meet this demand for candidates with higher degrees, supply had to step up, and more and more students took on college degrees, master’s, and doctorates. This created too much supply, which caused many to be underemployed in their profession vis-à-vis the degrees they held.

When discussing our daughter’s future with my husband, this cycle of supply and demand was my biggest concern about giving college a miss. What if, when our daughter applied for a job, she was the only one without a degree?

Companies are breaking with Past Requirements

Fortunately for my daughter and her Gen Z pals, things are changing and companies are breaking with past requirements. Recently, job-search site Glassdoor compiled a list of 15 top employers that say they no longer require applicants to have a college degree. On that list are tech companies Google, Apple, and IBM; retailers Home Depot, Lowe’s, and Whole Foods; and financial services firms Bank of America and Ernst & Young.

Change might bring Less Debt, more Diversity, better Skills

The list of jobs offered by each company in this Glassdoor compilation is not as comprehensive as I would have hoped, but it is hopefully a first step. Considering candidates with hands-on experience gained through boot camps and vocational courses opens up the opportunity for a more diverse workplace by widening the hiring pool. Such a move is not just good for increasing diversity but also for ensuring that the people you hire have the most up-to-date skills. Especially when it comes to technology, things move so fast that your skills might need a refresh by the time you finish a four-year degree. Gen Z is very much a hands-on generation already; just look at the many YouTube videos that teach how to do everything from gaming to cooking. And you never know: if learning no longer has to be linked to income alone, Gen Z might be lucky enough to rediscover the pleasure of learning for learning’s sake, not for money’s sake.

Smart Speaker Growth, Alexa Screens vs. Google Screens

on August 28, 2018
Reading Time: 5 minutes

To date, approximately 65 million smart speakers have been sold worldwide. Most of those went to North America, but the rest of the world, and China in particular, is catching up fast. The YoY growth in the market has been impressive; it is, without doubt, the fastest-growing category in consumer electronics. In my own analysis, I include smart speakers in the broader smart home/connected home tech category, which is itself growing quite fast. Put all this together, and the big-picture observation is that smart home/connected home technology is exploding in growth, further evidence that it is the next consumer electronics battleground. Here are two charts from my model to help visualize the market.

Survey: Real World AI Deployments Still Limited

on August 28, 2018
Reading Time: 3 minutes

You’d be hard-pressed to find a topic that’s received more attention, been more closely scrutinized, or been talked about at greater length recently than Artificial Intelligence, or AI. Alternately hailed as both the next big thing in technology—despite a multi-decade gestation period—and the biggest threat that the tech industry has ever created, AI and the related field of machine learning are unquestionably now woven into the fabric of modern life and are likely to remain there for some time to come.

Despite all the interest in the topic, however, there’s surprisingly little insight into how it’s actually being used in real-world applications, particularly in business environments. To help address that information gap, TECHnalysis Research recently engaged in an online survey of IT and other tech professionals in medium (100-999 employees) and large (1,000+ employees) US businesses to help determine how AI is being deployed in new applications created by these organizations.

After starting with a sample of over 3,700, the survey respondents were whittled down to a group of just over 500 who provided information on everything from the applications they were creating and the chip architectures they leveraged for inferencing and training, to the cloud platforms they utilized, the AI frameworks they used to build their applications, where they were deploying the applications now, where they planned to deploy them in the future, and much more. The full analysis of all the detailed data is still being completed, but even with some early topline results, there’s an important story to tell.

First, it’s interesting to note that just under 18% of the total original sample claimed to be either pilot testing or doing full deployments of applications that integrate AI technology. In other words, nearly 1 in 5 US companies with at least 100 employees have started some type of AI efforts. Of that group, 56% are actively deploying these types of applications and 44% are still in the development phase. Among companies in the sample group who are self-proclaimed early adopters of technology, an impressive 72% said they are using AI apps in full production environments. For medium-sized companies in the qualifying group, slightly more than 50% said they were in full production, but the number rises to just under 61% for large companies.

Equally interesting were the reasons that the remaining 82% of the total sample group are not creating AI-enhanced applications. Not surprisingly, cost was a big factor among those who were even considering the technology. In fact, 51% of that group cited the cost of creating and/or running AI applications as the key factor in why they weren’t using the technology. The second largest response, at almost 35%, came from those who were intrigued by the technology, but just weren’t ready to deploy it yet.

The third largest response, at nearly 32% (note that respondents were allowed to select multiple factors, so the total adds up to over 100%), related to a real-world concern that many companies have voiced—they don’t have the in-house expertise to build AI apps. This isn’t terribly surprising given the widely reported skills gap and demand for AI programmers. Nevertheless, it highlights both a big opportunity for developers and a huge challenge for organizations that do want to move into creating AI-enabled applications. The next most common response from this group, at 29%, was that they didn’t know how AI would be applicable to their organization, and another 26% cited not enough knowledge about the subject.

Both of these last two issues highlight another real-world concern around AI: the general lack of understanding that exists around the topic. Despite all the press coverage and heated online discussions about AI, the truth is, a lot of people don’t really know what AI is, nor what it can do. Of course, it doesn’t help that there are many different definitions of artificial intelligence and a great deal of debate about what really “counts” as AI. Still, it’s clear that the tech industry overall needs to invest a great deal more time and money in explaining what AI and machine learning are, what they can (and cannot) do, and how to create applications that leverage the technologies if they hope to have more than just a limited group of companies participate in the AI revolution.

From an industry perspective, it’s probably not surprising, but still interesting, to observe that almost 27% of respondents who were piloting or deploying AI apps came from the Tech industry. Given that tech workers make up less than 5% of the total workforce, this data point shows how much more the Tech industry is focused on AI technology than other types of businesses. The next largest industry at 13.3% was Manufacturing followed by Professional, Scientific and Technical Services at just under 10% of the respondents.

There’s a great deal more information to be culled from the survey results. In future columns I plan to share additional details, but even from the top-line findings, it’s clear that, while the excitement around AI in the business world is real, there’s still a long way to go before it hits the mainstream.

Why Consumer VR Headsets Have Potential but Need a Killer App to Survive

on August 27, 2018
Reading Time: 3 minutes

From the first time I used a VR headset, I was skeptical that it could ever become a consumer hit. The industrial-strength models, such as the original Oculus Rift or the HTC Vive, were expensive and had to be tethered to a PC to work. While what they delivered in the way of VR functionality was exciting, they mostly garnered interest in gaming and in some vertical markets.

Samsung jumped in with its Gear VR headset, into which you place a Samsung phone so that the smartphone powers the experience. Early models were interesting and got some consumer uptake but never really took off.

Today we have some new VR headsets, most notably the Oculus Go and Lenovo’s Mirage Solo with Daydream, in the $199-$399 price range, where the headset itself contains the CPU and internal memory and delivers a stand-alone VR experience. These models are aimed squarely at consumers, and the companies behind these new VR products hope they could finally cause the low end of the VR headset market to take off.

For the past two weeks, I have been using both the Lenovo Mirage Solo with Daydream and the Oculus Go and have enjoyed the experience. In my case, I watch Netflix and Hulu shows on them, since they deliver a big-screen viewing experience that is fun to watch. I am also an armchair traveler these days, and the various shows that highlight different countries and points of interest are cool too.

Not much of this content is true VR. Hulu and Netflix have their apps on these devices so you can view their content on a big screen. Some of the travel apps have 3D, 360-degree viewing features. On the other hand, the Disney VR snippets and some of the other apps deliver actual VR experiences that put you in the center of the action, and these apps show the real potential that a consumer VR headset can provide.

However, these dedicated VR apps are scarce today on these stand-alone consumer VR headsets, which brings me to the real problem that needs to be solved if these devices are to take off. While many apps and travel sites deliver 360-degree views, and in some cases do it in 3D, it’s the actual VR experience that could bring these headsets to more consumers.

For example, Disney has a few VR examples in its Oculus Go app that bring you right into a movie scene. In the dining scene from Beauty and the Beast, you are sitting at the head of the table while the plates, dishes, and the candlestick dance in front of you and around you. In the Coco movie preview, you are on the stage with one of the lead characters as he sings and dances. Disney seems very committed to VR and over time is planning to convert more of its movies to VR and even create dedicated VR videos too.

There are also some specialty video sites that have created 3D VR-styled videos in which they use a 3D camera and place you into a specific scene. Then there are the VR games that plop you into the action, and roller-coaster-type apps in which you feel like you are sitting in the roller coaster as it travels along its track, giving you the visual sensations you get when riding a real roller coaster. (These are the apps that cause dizziness and nausea, and this particular problem needs to be addressed for any VR headset to gain broad acceptance.)

I admit that I am enamored with these low-end stand-alone VR headsets and can waste many hours playing games and watching videos. Even though most of the content is still 2D and the VR apps are not plentiful, the experience, at least for a techie like me, is always fun. However, what exists in the way of 2D, 3D, and VR content today makes it hard for a mainstream consumer to justify the cost at this point. Also, from using these for a while, I have not seen what I would call a “killer app” for low-end VR headsets.

The higher-end VR headsets that deliver high-quality gaming experiences are the killer app for that set of people. Also, in vertical markets, the VR apps people need to do their jobs more effectively are the killer app for them.

However, after viewing over 100 apps and videos on these low-cost stand-alone VR headsets, I cannot say that any one of them would drive me to buy one if I were a mainstream consumer.

There are some categories of apps that could be attractive to certain audiences. Seniors might enjoy the travel apps and documentaries. Gen Z and some millennials might enjoy the gaming apps. There are some useful educational apps and even ones that are great for meditating. Moreover, as I said above, I like watching Netflix and Hulu since I get the giant-movie-screen viewing experience with these services on a VR headset.

However, we need a killer app of some kind that is transformative and can capture the interest of a broad consumer audience for these headsets to ever go mainstream. Until that happens, I am afraid the demand for these low-cost, self-contained VR headsets will remain tepid at best.

Nvidia Turing brings higher performance, pricing

on August 24, 2018
Reading Time: 3 minutes

During the international games industry show Gamescom, in Cologne, Germany this week, NVIDIA CEO Jensen Huang took the covers off the company’s newest GPU architecture aimed at enthusiast PC gamers. Codenamed Turing and carrying the GeForce RTX brand, the shift represents quite a bit more than just an upgrade in performance or better power efficiency. This generation, NVIDIA is attempting to change the story with fundamentally changed rendering techniques, capabilities, and, yes, prices.

At its heart, Turing and GeForce RTX include upgrades to the core functional units of the GPU. Based on a very similar structure to previous generations, Turing will improve performance in traditional and current gaming titles with core tweaks, memory adjustments, and more. Expect something on the order of 1.5x or so. We’ll have more details on that later in September.

The biggest news is the inclusion of dedicated processing units for ray tracing and artificial intelligence. Much like the Volta GPUs being utilized in the data center for deep learning applications, Turing includes Tensor Cores that accelerate the matrix math functions necessary for deep learning models. New RT Cores, a first for NVIDIA in any market, are responsible for improving the performance of traversing ray structures, allowing real-time ray tracing an order of magnitude faster than current cards.
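
As a rough illustration of what “traversing ray structures” involves, here is a tiny sketch of the slab test used to check whether a ray can hit an axis-aligned bounding box, the basic operation a bounding volume hierarchy repeats to skip empty parts of a scene. This is my own simplified example, not NVIDIA’s RT Core design.

```python
# A minimal sketch of the kind of test at the heart of "traversing ray structures":
# checking whether a ray can possibly hit anything inside an axis-aligned bounding box
# (the classic "slab test"). A bounding volume hierarchy (BVH) repeats this test many
# times to cull empty regions of a scene; dedicated ray tracing hardware accelerates
# this sort of traversal work. Illustrative only, not NVIDIA's implementation.

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: True if a ray (with nonzero direction components) crosses the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))   # latest entry across the three slabs
        t_far = min(t_far, max(t1, t2))     # earliest exit across the three slabs
    return t_near <= t_far                  # a non-empty overlap means the box is hit

# A ray angled slightly off the -z axis hits a box straight ahead of the camera...
print(ray_hits_box((0, 0, 0), (0.1, 0.1, -1.0), (-1, -1, -6), (1, 1, -4)))  # True
# ...but misses a box that sits well off to the side.
print(ray_hits_box((0, 0, 0), (0.1, 0.1, -1.0), (3, 3, -6), (5, 5, -4)))    # False
```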

Both of these new features will require developer integration to really take advantage of them, but NVIDIA has momentum building with key games and applications already on the docket. Both Battlefield V and Shadow of the Tomb Raider were demoed during Jensen’s Gamescom keynote. Ray tracing augments standard rasterization rendering in both games to create amazing new levels of detail in reflections, shadows, and lighting.

AI integration, for now, is limited to a new feature called DLSS that uses AI inference locally on the GeForce RTX Tensor Cores to improve image quality of the game in real-time. This capability is trained by NVIDIA (on its deep learning super computers) using the best quality reference images from the game itself, a service provided by NVIDIA to its game partners that directly benefits the gamer.

There are significant opportunities for AI integration in gaming that could be addressed by NVIDIA or other technology companies. Obvious examples would include computer-controlled character action and decision making, material creation, and even animation generation. We are in the nascent stages of how AI will improve nearly every aspect of computing, and gaming is no different.

Pricing for the new GeForce RTX cards definitely raised some eyebrows in the community. NVIDIA is launching this new family at a higher starting price point than the GTX 10-series launched just over two years ago. The flagship model (RTX 2080 Ti) will start at $999 while the lowest priced model announced this week (RTX 2070) comes in at $499. This represents an increase of $400 at the high-end of the space and $200 at the bottom.

From its view, NVIDIA believes the combination of performance and new features that RTX offers gamers in the space is worth the price being asked. As the leader in the PC gaming and graphics space, the company has a pedigree that is unmatched by primary competitor AMD, and thus far, NVIDIA’s pricing strategy has worked for them.

In the end, the market will determine if NVIDIA is correct. Though there are always initial complaints from consumers when the latest iteration of their favorite technology is released with a higher price tag than last year’s model, the truth will be seen in the sales. Are the cards selling out? Is there inventory sitting on physical and virtual shelves? It will take some months for this to settle out as the initial wave of buyers and excitement comes down from its peak.

NVIDIA is taking a page from Apple in this play. Apple has bucked the trend that says every new chip or device released needs to be cheaper than the model that preceded it, instead raising prices with last year’s iPhone X and finding that the ASP (average selling price) jumped by $124 in its most recent quarter. NVIDIA sees its products in the same light: providing the best features with the best performance and, thus, worthy of the elevated price.

The new GeForce RTX family of graphics cards is going to be a big moment for the world of PC gaming and likely other segments of the market. If NVIDIA is successful with its feature integration, partnerships, and consumer acceptance, it sets the stage for others to come into the market with a similar mindset on pricing. The technology itself is impressive in person and proves the company’s leadership in graphics technology, despite the extreme attention that it gets for AI and data center products. Adoption, sales, and excitement in the coming weeks will start to tell us if NVIDIA is able to pull it all off.

News You Might Have Missed: Week of August 24, 2018

on August 24, 2018
Reading Time: 4 minutes

Apple removes Facebook’s VPN app Onavo

Apple officials told Facebook last week that Onavo violated the company’s rules on data collection by developers, and suggested last Thursday that Facebook voluntarily remove the app. Facebook said in a statement that it’s transparent with Onavo users: “We’ve always been clear when people download Onavo about the information that is collected and how it is used,” the company said. “As a developer on Apple’s platform, we follow the rules they’ve put in place.”

The Great Tech Questioning

on August 23, 2018
Reading Time: 4 minutes

The past year has been a challenging one for tech, what with #metoo moments, security and privacy breaches, unseemly use of power, and certainly some missteps in the ‘fake news’/Russia meddling arena. And despite the seeming incongruity between these incidents/actions/behaviors and tech company earnings and sky-high valuations, there has started to be a reckoning, of sorts.

But I think there’s a bigger issue in play, one with greater potential long-term consequences. I call it the “Great Tech Questioning”. For the past 10-15 years, going back perhaps to the advent of the smartphone circa 2005, the talk has been about industries that have been ‘disrupted’. At first it was more about substitution, such as cellular replacing landlines, broadband smartphones replacing cameras and GPS units, digital media replacing physical media, and so on. Then it became more about entire industries being disrupted: photography, newspapers and magazines, retail, and so on. But more recently, the types of changes we’re seeing as a result of some of the most successful and fastest-growing companies in history are starting to have far broader business and societal consequences. And we’ve been caught largely flat-footed in terms of the longer-term ramifications and how to deal with them.

Let’s take four companies as examples. First, Uber. It plunged into a space that was ripe for disruption and rife with corruption. And though most of us love the service, Uber and its ilk grew so fast and so unchecked that we failed to assess the consequences: the significant increase in congestion in some cities, which hampers one of ride-sharing’s key selling points of making it easier (and cheaper) to get from A to B. Another incongruity is that while Uber was initially hailed as a more favorable model for drivers, we failed to anticipate the bottom falling out of medallion prices, which has affected hundreds of thousands of hard-working individuals.

Second, AirBnB. It is still a great thing in many respects, but its rapid and largely unregulated growth resulted in its straying from its mission – and not really from any corporate wrongdoing. My wife and I were the initial ‘target’ AirBnB hosts. Sitting right between Boston College and Boston University, we’d rent out a room on our top floor for $100 per night, which was a godsend to parents visiting their kids in under-hoteled and over-priced Boston. This was the problem AirBnB was trying to solve. But then developers, speculators, and opportunists swept in, killing rental inventory and disrupting the housing industry in already tight and expensive cities.

Third, Apple — as the poster child for the smartphone and its ‘ecosystem’. This wireless broadband pocket computer is indeed a modern marvel. Its high level of usefulness was hugely evident on a recent vacation: helping us navigate our way, record beautiful places, stay in touch with work, friends and family, and enjoy media of many sorts. But this has also been a year where there have been serious questions about the effects of ‘screen addiction’. Many people have a really hard time applying the ‘everything in moderation’ mantra to their phones.

Finally, Facebook. Similar to the three examples cited above, it’s valuable and useful to hundreds of millions of people worldwide. But its unchecked growth, pursuit of profit, and poor corporate judgement have led to abuses of its platform, by the company itself and by myriad third party actors.

As a visceral reaction to this, we’ve seen a lot of questions being asked in 2018, and a giant ‘hey, slow down’ come from numerous directions: Europe fining Google and implementing GDPR; the Zuckerberg hearings in Washington; the caps being placed on ride-sharing licenses in New York, and the various skirmishes being waged daily in locations worldwide; the backlash against ‘over-tourism’ and the attempt by some cities to impose some regulations on AirBnB; the stunning letter in January by two of Apple’s largest investors, reflecting concerns about the effects of technology and social media; and questions about IP theft, figuring into Qualcomm/Broadcom, Huawei, ZTE, and so on.

The acceleration of big data and AI, combined with a turn toward the autocratic and authoritarian in some countries, is amplifying some of these concerns. This stuff can go from merely creepy to downright Orwellian in a hurry. In our heated conversation about immigration, for example, how long will it be before ICE snoops on individuals’ location data and messaging content?

I’m hoping that all this is the catalyst for some important conversations about the long-term effects of tech acceleration on the future of how we live, work, and get around. Some 27 million Americans are employed in the ‘gig economy’, according to a report I recently read…what happens to these people’s livelihoods, health care, and retirement, long term? Can ride-sharing services become more of a conversation about the future of transportation than just ‘cheaper than a cab and better than a bus’? Will the disruption being caused by AirBnB catalyze a conversation about the future of housing in the many cities facing a severe housing crunch? And can we adopt an ‘everything in moderation’ mantra on smartphones, and re-learn (or learn for the first time) some of the people-navigation and long-form attention skills that were so essential before the crutch of our phones and e-everything?

There are no easy answers to these questions. But we might look back on 2018 as the year that some of these important conversations started in earnest.

Brands Bypassing App Stores and the Value of a Marketplace

on August 23, 2018
Reading Time: 4 minutes

An important debate is brewing, and along with it an interesting strategic arc for brands may be emerging. The discussion of how much a marketplace owner should receive is not new, but it has become more heated of late amid reports that Netflix is looking at ways to avoid Apple’s cut of subscriptions generated from within the Netflix app. This move is not surprising, as Amazon has been doing the same with digital content (the only place Apple’s cut is applicable). I’m reminded of this every time I purchase a Kindle book on Amazon, which is about twice a quarter: I have to leave the Amazon app, go to Amazon.com in Safari, buy my book for Kindle, then go back to the iOS app to download and start reading it. There is a lot of friction in this process, and it is annoying, but I do not blame Amazon one bit.

Arm takes Aim at Intel’s Client PC Business

on August 22, 2018
Reading Time: 3 minutes

Arm held an analyst briefing last week in which they shared some details about new features coming to their CPU platform. While they often communicate with analysts just before they announce a new product, this was the first time they showed us a roadmap for the Arm core and detailed its evolutionary path through 2023.

Apple Must Reinvent the Genius Bar

on August 22, 2018
Reading Time: 4 minutes

Last week @mgsiegler wrote a post about his customer experience at an Apple Store. While the issue that brought him to a store is somewhat unique, his account of long lines and wait times despite having an appointment was not that different from the complaints I have heard pop up from friends who are iPhone and iPad users, and from what I have experienced myself on a couple of occasions.

There was a lot in the post, but I want to focus on one point I agree with: Apple has reached a scale that makes the current customer service model unsustainable.

Big Retail Stores are not the Model

Apple knew they had a scale issue back in 2012 when they hired John Browett, chief executive of Dixons Retail, a large chain of electronics stores in the UK. Browett was replacing Ron Johnson, who had left Apple to become the CEO of J.C. Penney. Before Dixons, Browett had spent eight years at Tesco, a leading UK supermarket chain.

Clearly, Browett brought an understanding of large retail companies to Apple. However, as I commented at the time, the high-quality customer care Apple’s customers were used to seemed to be at odds with the poor customer service Dixons was renowned for.

So it was no surprise when, less than a year after he joined, Browett was let go. During his time at Apple, he was said to have focused on reducing headcount in an attempt to cut payroll costs, as well as general spending on the upkeep of the physical stores. In short, Browett was focused on growing profitability by teaching Apple stores to “run lean,” as he was apparently quoted as saying. But Apple stores are not about profits!

Tim Cook took over from Browett until he hired Angela Ahrendts in December 2014. Ahrendts, who was the first woman to join Apple’s executive team in almost a decade, was given responsibility for both physical and online retail. In her previous role at Burberry, Ahrendts was able to turn the brand around, making it relevant to the mainstream while retaining its luxury status. The challenge was not that different at Apple, where the stores had to be able to deal with more customers while continuing to make you feel you were the only one that mattered.

It’s not about Selling

When you read Ahrendts’ bio on Apple’s website, and you think of some of the stores that were launched under her leadership, from Chicago’s Michigan Avenue to Milan’s Piazza Liberty, it is easy to see she is delivering on the promise of what Apple retail is supposed to be:

“Since joining Apple in 2014, Angela has integrated Apple’s physical and digital retail businesses to create a seamless customer experience for over a billion visitors per year with the goal of educating, inspiring, entertaining and enriching communities. Apple employees set the standard for customer service in stores and online, delivering support from highly trained Geniuses and expert advice from Creative Pros to help customers get the most out of their Apple products.”

In a recent interview at the Cannes Lions, Ahrendts reiterated much of the same, pointing out that shopping is moving online but that buyers will still go into a physical store to finalize their purchase. Because of this, retail has to evolve.

Although revenues generated by the stores are growing, I have always argued that Apple stores are much more a marketing machine for Apple than they are a revenue one. Getting people in to fully immerse themselves in what being in the Apple world feels like is not new, though. Ahrendts added more of a community focus to it at a time when, more often than not, tech companies are seen as damaging the community rather than enhancing it.

Evolving Customer Care

While all this is great for Apple’s overall branding, it seems to get in the way of current customers going to the stores to get support. Existing customers, especially long-term customers, have been accustomed to turning up at the store with whatever problem they had and having it resolved without even needing an appointment. This excellent customer service is a big part of why people bought Apple.

As Apple’s customer base grew, so did the need for support, a need that can no longer be fulfilled in the way it has been over the years. As Ahrendts points out, retail must evolve, and I would add that customer care must evolve too.

The Genius Bar, which for the longest time has been the pride and joy of Apple, can no longer be the first option for customer support. Apple’s website encourages customers to get support via phone, chat, email, even Twitter, and of course there are authorized service providers. But come on, if I buy Apple, I want Apple to take care of me, right? I want to get to a store and feel I get the attention, love, and care I feel I pay for as a “special customer.”

It seems to me that Apple should come up with something that delivers as caring and personal an experience as it was back in the day, when I went into a store and met with my Genius Bar guru who knew everything about me and my device.

Today, through technology, Apple can deliver the same “boutique feel” thanks to a device that knows me and knows itself. Machine learning and artificial intelligence could help with self-diagnosis, and an app or even Siri could walk a user through some basic testing to help assess whether I can fix the problem myself, need to go into a store, or should mail my device in. The Genius would move from the Bar to my device. Setting the right expectations from the start, avoiding wasted time, and eliminating friction overall while creating rapport with the brand is exactly what people liked about Apple’s customer service. The feeling of buying products from a company that puts its customers first, and that “Cheers – where everybody knows your name” factor, made Apple’s customer service second to none. Apple can do it again, this time putting its technology first rather than its store staff.

While I realize my vision is not going to be delivered overnight, I believe that, if done well, this “Genius on device” would add even more value to Apple’s products and it would position Apple’s customer care as the industry benchmark once again.

Magic Leap, Tech’s Pause, and Patient Innovation

on August 21, 2018
Reading Time: 4 minutes

Magic Leap recently gave members of the tech press hands-on demonstrations that led to a series of articles and reviews of the previously mysterious augmented reality headset. Before this, all the public had seen were brief accounts from influencers or investors and a few demo videos on Magic Leap’s website. Last week, the cat officially came out of the bag.

Nvidia RTX Announcement Highlights AI Influence on Computer Graphics

on August 21, 2018
Reading Time: 3 minutes

Sometimes it takes more than just brute horsepower to achieve the most challenging computing tasks. At the Gamescom 2018 press event hosted by Nvidia yesterday, the company’s CEO Jensen Huang hammered this point home with the release of the new line of RTX 2070 and RTX 2080 graphics cards. Based on the company’s freshly announced Turing architecture, these cards are the first consumer-priced products to offer real-time ray tracing, a long-sought-after target in the world of computer graphics and visualization. To achieve that goal, however, it took advancements in both graphics technologies and deep learning and AI.

Ray tracing essentially involves the realistic creation of digital images by following, or tracing, the paths that light rays would take as they hit and bounce off objects in a scene, taking into consideration the material properties of those objects, such as reflectivity, light absorption, color, and much more. It’s a very computationally intensive task that previously could only be done offline, not in real time.
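
For readers who want a feel for what “tracing” a ray actually means, here is a deliberately tiny toy example in Python: it fires one ray per character-sized pixel at a single sphere and shades the hit points with simple diffuse lighting. It only illustrates the basic geometry involved; it bears no resemblance to how Nvidia’s RT Cores are implemented, and every scene value in it is made up.

```python
# Toy ray tracer: one ray per ASCII "pixel", one sphere, diffuse shading.
# Purely illustrative; not related to Nvidia's RT Core implementation.

import math

WIDTH, HEIGHT = 32, 16
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT_DIR = (0.577, 0.577, 0.577)  # unit vector pointing toward the light

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def hit_sphere(origin, direction):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = tuple(origin[i] - SPHERE_CENTER[i] for i in range(3))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render():
    for y in range(HEIGHT):
        row = ""
        for x in range(WIDTH):
            # Build a ray from the camera through this pixel.
            u = (x / WIDTH - 0.5) * 2.0
            v = (0.5 - y / HEIGHT) * 2.0
            length = math.sqrt(u * u + v * v + 1.0)
            direction = (u / length, v / length, -1.0 / length)
            t = hit_sphere((0.0, 0.0, 0.0), direction)
            if t is None:
                row += " "  # background
            else:
                # Brightness depends on how directly the surface faces the light.
                p = tuple(t * direction[i] for i in range(3))
                n = tuple((p[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS for i in range(3))
                brightness = max(0.0, dot(n, LIGHT_DIR))
                row += " .:-=+*#@"[int(brightness * 8)]
        print(row)

if __name__ == "__main__":
    render()
```

Running it prints a rough ASCII shading of the sphere. Scaling that same idea up to millions of rays per frame, multiple bounces, and physically based materials is what makes real-time ray tracing so demanding.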

What was particularly interesting about the announcement was how Nvidia ended up solving the real-time ray tracing problem—a challenge that they claimed to have worked on and developed over a 10-year period. As part of their RTX work, the company created some new graphical compute subsystems inside their GPUs called RT Cores that are dedicated to accelerating the ray tracing process. While different in function, these are conceptually similar to programmable shaders and other more traditional graphics rendering elements that Nvidia, AMD, and others have created in the past, because they focus purely on the raw graphics aspect of the task.

Rather than simply using these new ray tracing elements, however, the company realized that they could leverage other work they had done for deep learning and artificial intelligence applications. Specifically, they incorporated several of the Tensor cores they had originally created for neural network workloads into the new RTX boards to help speed the process. The basic concept is that certain aspects of the ray tracing image rendering process can be sped up by applying algorithms developed through deep learning.

In other words, rather than having to use the brute-force method of rendering every pixel in an image through ray tracing alone, AI-based techniques like denoising are used to speed up the process. Not only is this a clever implementation of machine learning, but I believe it’s likely a great example of how AI is going to influence technological developments in other areas as well. While AI and machine learning are often thought of as delivering capabilities and benefits in and of themselves, they’re more likely to provide enhancements and advancements to existing technology categories by accelerating certain key aspects of those technologies, just as they have for computer graphics in this particular application.
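
The division of labor is easier to see in a sketch. The toy Python below renders a deliberately noisy image with very few samples per “pixel” and then runs a denoising pass over it. To keep it self-contained, the denoiser here is just a 3x3 neighborhood average standing in for the trained neural denoiser Nvidia runs on its Tensor cores, and the “ray traced” sample is a made-up placeholder.

```python
# Conceptual sketch of the hybrid approach: render with few ray samples
# per pixel (cheap but noisy), then denoise. The 3x3 average below is a
# stand-in for a trained neural denoiser; the sample function is a
# placeholder, not real path tracing.

import random

WIDTH, HEIGHT, SAMPLES_PER_PIXEL = 64, 64, 2

def trace_sample(x, y):
    # Placeholder for one ray-traced sample: the "true" value plus heavy
    # noise, mimicking the variance of low-sample path tracing.
    true_value = (x / WIDTH + y / HEIGHT) / 2.0
    return min(1.0, max(0.0, true_value + random.gauss(0.0, 0.3)))

def render_noisy():
    # Few samples per pixel keeps the ray tracing cost low but leaves noise.
    return [[sum(trace_sample(x, y) for _ in range(SAMPLES_PER_PIXEL)) / SAMPLES_PER_PIXEL
             for x in range(WIDTH)] for y in range(HEIGHT)]

def denoise(image):
    # Stand-in denoiser: average each pixel with its 3x3 neighborhood.
    out = [[0.0] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            neighbors = [image[j][i]
                         for j in range(max(0, y - 1), min(HEIGHT, y + 2))
                         for i in range(max(0, x - 1), min(WIDTH, x + 2))]
            out[y][x] = sum(neighbors) / len(neighbors)
    return out

if __name__ == "__main__":
    noisy = render_noisy()
    clean = denoise(noisy)
    print("center pixel before/after denoise:",
          round(noisy[HEIGHT // 2][WIDTH // 2], 3),
          round(clean[HEIGHT // 2][WIDTH // 2], 3))
```

The expensive part (tracing rays) is done sparsely, and the learned part (denoising) fills in the rest, which is why pairing RT Cores with Tensor cores makes the real-time target reachable.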

It’s also important to remember that ray tracing is not the only image creation technique used on the new family of RTX cards, which will range in price from $499 to $1,199. Like all other major graphics cards, the RTX line will also support more traditional shader-based rasterization technologies, allowing products based on the architecture to work with existing games and other applications. In fact, games will have to be specifically designed to tap into the new ray tracing features; they won’t simply show up on their own. Thankfully, it appears that Nvidia has already lined up some big-name titles and game publishers to support its efforts. PC gamers will also have to think carefully about the types of systems that can support these new cards, as they are very power-hungry, demanding up to 250W on their own (and a minimum 650W power supply for the full desktop system).

For Nvidia, the RTX line is important for several reasons. First, achieving real-time ray tracing is a significant milestone for a company that’s been highly focused on computer graphics for 25 years. More importantly, though, it allows the company to combine what some industry observers had started to see as two distinct business focus areas (graphics and AI/deep learning/machine learning) into a single coherent story. Finally, the fact that this is its first major gaming-focused GPU upgrade in some time can’t be overlooked either.

For the tech industry as a whole, the announcement likely represents one of the first of what will be many examples of companies leveraging AI/machine learning technologies to enhance their existing products rather than creating completely new ones.

Should Facebook Create Vertical Channels to Survive?

on August 20, 2018
Reading Time: 4 minutes

When Facebook started out, its mission was to be a social network connecting college classmates and friends, and it soon also caught on as a social medium for communicating with family and loved ones all over the world. While I was not a college student by any means when I joined Facebook a year after it launched, my reason for joining was to keep up with family, friends, and business associates.

For the first five years of Facebook’s existence, this was its primary audience, and the company catered to these people’s needs by adding contextual content and contextual ads. By 2010, the audience had hit close to 220 million users, and the types of people using the platform started to expand exponentially. Businesses, media outlets, brands, and other organizations began to discover that Facebook was an excellent way to reach their customers, and they started taking Facebook from its social media roots toward something more like a publishing platform for content that allowed them to interact directly with any Facebook member.

However, I believe it was the Arab Spring in January 2011 when Facebook moved from a social media platform to a full-fledged publishing platform. Since then it has expanded partnerships with all types of media publications, businesses, and brands, and it gets most of its revenue from ads. Because of the role it played in the Arab Spring uprising, Facebook morphed even further into the world of politics and has allowed all types of players to make political comments, place political ads, and spread fake news, much of it political in nature.

The Arab Spring ended up serving two purposes for those with political agendas. First, it gave supporters of a political action a rallying cry; in the case of the Arab Spring uprising, it was a call to arms that helped topple the leadership in Egypt. However, it also gave those with opposing positions a new vehicle for spreading their agenda, and they took to Facebook to promote their views using any means possible, including propaganda and false news tailored to support their position.

I believe the role Facebook plays in influencing political agendas cannot be addressed under its current rules and terms of service, and the company needs to shift toward a more publishing-focused business model to continue to grow. If it operated more like a publishing platform, applying the kind of journalistic standards used by the top newspaper and magazine publishers today, it could get control of what type of material reaches its customers.

I realize this is very controversial, but I no longer believe Facebook can thrive under its current policies. For example, can you imagine Alex Jones ever being allowed to publish his content in the New York Times or the Wall Street Journal? He would never be allowed to, because these publishers have a code of ethics and rules that govern what can and cannot be published on their pages. That is why I believe Facebook has to come to grips with its role as a publishing medium and put stricter controls in place to keep false news and propaganda off its site, the same way mainstream publishers control their content today.

Another way to keep the company growing, even with stricter rules and controls around the main site, is to develop what I call vertical channels that become spin-outs from Facebook itself. If you look at Instagram, one of Facebook’s properties, you could consider it a vertical channel already; its focus is just on sharing pictures. To a degree, the Oculus program, with its dedicated apps and services, can be viewed as a vertical channel too, although it will eventually play a key role inside Facebook’s VR rooms.

If you broadly scan Facebook today, you see posts from people showing off DIY projects, food and recipes, and all types of hobbies and interests. At the moment these are not organized or even grouped into dedicated, like-minded communities. However, what if they were? What if Facebook had a channel just for those who love Italian food and recipes and brought those people together to participate? It would attract ads from companies touting Italian food supplies and travel to Italy. As a diver, I would like to find like-minded diving friends with whom I can share my interests and see what’s new in dive gear and related products and services, such as dive trip locations and diving holiday packages.

I realize there are already dozens of these vertical sites for food, diving, and more. However, imagine if Facebook could tap into the special interests of its 2.5 billion users and bring millions of them together around a dedicated hobby or interest. It could drive even more targeted revenue and allow the company to diversify beyond its current social media focus, which, as I stated above, needs to be recognized as a publishing platform to give it more control over what content can and cannot be posted on its site.

Facebook has gone well beyond its social network focus and is much more than that for all types of people and groups. However, without stricter rules and regulations guiding its future, I do not think it can continue to grow. In my view, putting in place controls that track the way publishers deal with the content allowed on their sites would be the first step to stem the tide of people leaving the platform or becoming less engaged. Adding vertical channels, meanwhile, could be the ticket that keeps Facebook growing while still serving, and not angering, its current users.

Podcast: NVidia Turing, ARM CPUs, AMD Threadripper, Intel AI

on August 18, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing major developments in the semiconductor industry, including the announcement of NVidia’s Turing GPU architecture and the company’s quarterly earnings, the debut of ARM’s CPU roadmap for PCs, the impact of AMD’s new Threadripper CPU and their datacenter plans, and Intel’s new AI developments.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Cortana and Alexa: The Next Step Forward for Voice

on August 17, 2018
Reading Time: 4 minutes

This week Amazon and Microsoft announced the rollout of Alexa and Cortana integration. First discussed publicly one year ago, the collaboration represents an important step forward for smart assistants today and voice as an interface in the future. I’ve been using Alexa to connect to Cortana, and Cortana to connect to Alexa, and while it’s clearly still in the earliest stages of development, it generally works pretty well. The fact that these two companies are working together—and other notables in the category are not—could offer crucial clues about the ways this all plays out over time.

Cortana, Meet Alexa
Enabling the two assistants to talk to each other is straightforward, assuming you’re already using both individually. You enable the Cortana skill in the Alexa app and sign into your Microsoft account. Next, you enable Alexa on Cortana and sign into your Amazon account. To engage the “visiting” assistant, you ask the resident one to open the other: you ask Alexa to “open Cortana” and Cortana to “open Alexa.” In my limited time using the two, I found that accessing Cortana via Alexa on my Echo speaker seemed to work better than accessing Alexa via Cortana on my notebook. Your mileage may vary.

One of the biggest issues right now is that it gets quite cumbersome asking one assistant to open the other so that you can then ask that assistant to do something for you. One of the reasons Alexa has gained such a strong following, and is the dominant smart assistant in our home (four Dots, two Echos, and two Fire tablets and counting), is that it typically just works. The reason it just works is that Amazon has done a fantastic job of training us Echo users to engage Alexa the right way. It has done this by sending out weekly emails that detail updates to existing skills as well as introduce new ones. Alexa hasn’t so much learned how we humans want to interact with her; instead, we’ve adapted to the way she needs us to interact with her.

The issue with accessing Alexa through Cortana is that we lose that simplicity. I found myself trying to remember how I needed to engage Alexa while talking to the microphone on my notebook (Cortana). The muscle memory I’ve built around using Alexa kept getting short-circuited when I tried to access it through Cortana. I suspect this will self-correct with increased usage, but it’s obviously an issue today.

That said, even at this early stage, the potential around this collaboration is clear and powerful.

Blurring of Work and Home
We all know that the lines between our work lives and home lives are less clear than ever before. Most of us use a combination of personal and work devices throughout the day, accessing both commercial and consumer apps and services. But when it comes to smart assistants, the lines between home and work have remained largely unblurred. As a result, today Amazon has a strong grip on the things I do at home, from setting timers to listening to music to accessing smart-home devices such as connected lightbulbs, thermostats, and security systems. But Alexa knows very little about my work life. Here, I’d argue, Microsoft rules, as my company uses Office 365, and Cortana can tap into my Outlook email and calendar, Skype, and LinkedIn, among other things.

During my testing, I did things such as ask Alexa to open Cortana and check my most recent Outlook emails, or to access my calendar and read off the meetings scheduled for the next day. Conversely, I asked Cortana to open Alexa and check the setting of my Ecobee smart thermostat and to turn on my Philips Hue lights.

Probably the biggest challenge around this collaboration, once we get past the speed bump of asking one assistant to open another, is the need to discern individual users and then address their privacy and security requirements when working across assistants. Now that I’ve personally linked Alexa and Cortana, anyone in my house can ask Alexa to open Cortana and read off the work emails that previously were accessible only through Cortana (on a password-secured notebook). That’s a security hole they need to fill, and soon. The most obvious way to do this is for each of these assistants to recognize when I am asking for something versus when other members of my household (or visitors) are doing it.

Will Apple, Google, and Samsung Follow?
It makes abundant sense for Amazon and Microsoft to be first into the pool on this level of collaboration. While the two companies obviously compete in many markets, Cortana and Alexa represent an area where I’d argue both sides win by working together. I look forward to seeing where the two take this integration over the next few years.

But what about the other big players? Among the other three serving primarily English-speaking markets, I could imagine Samsung seeing a strong reason to cooperate with others. Its Bixby trails the others in terms of capabilities, but the company’s hardware installed base is substantial. At present, however, it seems less likely that either Apple with Siri or Google with Google Assistant would be interested in joining forces with others. With a strong position on the devices most people have with them day and night (smartphones), both undoubtedly see little reason to extend an olive branch to the competition. Near term, this might be the right decision from a business perspective. But longer term, I’m concerned it will slow progress in the space and lead to high levels of frustration among users who would like to see all of these smart assistants working together.

News You Might Have Missed: Week of August 17th, 2018

on August 17, 2018
Reading Time: 3 minutes

Google to Open Retail Store in Chicago

Google is planning a two-level store in Chicago’s Fulton Market district, its first known location for a retail flagship. The technology giant is close to finalizing a lease for almost 14,000 square feet on the first and second floors of several connected, two-story brick buildings between 845 and 853 W. Randolph St., according to sources.