Intel’s Moment of Truth

on June 21, 2018
Reading Time: 4 minutes

News broke this morning that Intel’s CEO Brian Krzanich was forced out after the board discovered that he violated the company’s non-fraternization policy with another employee. While this seems like an odd way for him to go, it is a welcome move for many employees, investors, and Intel watchers. Over the past year, I’ve turned extremely bearish on Intel.

DNA’s Role in Political Espionage and Beyond

on June 20, 2018
Reading Time: 3 minutes

One of the more fascinating stories to come out of the recent US and North Korean summit was the fact that N. Korean leader Kim Jong Un carried his toilet with him to the Singapore meeting with President Trump.

According to multiple accounts, Mr. Kim did this so that no person could get access to his stools to be able to test them for DNA and learn anything about his health. This may sound crazy but given modern DNA testing technology and the fact that we can learn a great deal about a person’s history as well as future health issues from these tests, this move by Mr. Kim does make some sense. He has to be very paranoid based on the damage he has done to N. Korean people and their economy. Keep in mind, he is a dictator and has to be in total control of everything, and this appears to trickle down to his toilet habits as well.

The VR App Stores’ Rocky Road

on June 20, 2018
Reading Time: 4 minutes

Over the past week, I have been playing with the Mirage Solo with Daydream, the standalone VR headset recently released by Lenovo. The headset uses the Daydream VR platform that has been available on the Daydream View headset since 2017. The key difference with the Mirage Solo, as the name gives away, is that you no longer require a phone to experience VR. The Mirage Solo also does not need a PC, unlike the HTC Vive or Oculus Rift. It is, in fact, a direct competitor to the Oculus Go but uses a new technology called WorldSense that allows it to track the world around you, or at least a good square meter or so of it.

Overall, I felt the Mirage Solo delivers a decent experience, and I very much appreciated not having to worry about a phone overheating or running out of battery. I also felt the freedom from cables was a welcome improvement over my Oculus experience, even though it did not take much moving around before WorldSense would ask to re-center the device. The peace of mind of walking around without worrying about tripping, and the instant-on of putting on the headset and enjoying content right away, made for a good start.

The Content and Devices Causality Dilemma

Content is where the Mirage Solo shows its weakness. The good news is that out of the box the Mirage Solo has access to all the Daydream apps that are available in the Google Play Store and the YouTube content. The bad news is that the Daydream apps are all there is.

The content is not bad, but it is limited. Some of it really does a disservice to the Mirage Solo, as it lacks the quality someone investing $400 in the device would like to see. And this is the issue. Creating good-quality content for VR is not cheap, and developers might be hesitant, understandably so, to invest in doing so while the addressable market is limited. Good-quality content comes at a price, with apps that cost as much as $19.99. As users might first try free or cheaper content, the lack of quality might put them off spending more. I find this to be a problem for the Play Store in particular, as consumers have historically spent less money there, relying on free apps more than in the iOS App Store. Delivering ad-funded apps in VR might also be more complex if you want to keep true to the content, or extremely annoying if you do not!

Lenovo smartly launched the Mirage Camera with Daydream so that users can create their immersive content by shooting videos that they can then enjoy with the Mirage Solo. That $300 price tag, however, might mostly appeal to early adopters.

While AR has similar issues with lack of compelling apps, users are not investing extra money in a device to try AR in the same way VR users do. It seems that the interim step of screen-less viewers is coming to an end and the industry wants to move towards standalone headsets for the mass market which makes content availability even more critical.

A Different Set of Rules

As I was trying different apps, I was also left wanting a different in-store purchase experience. With traditional apps, looking at the screenshots and reading the reviews is usually enough to get a sense of how good an app will be. I found that with VR there are way more variables at play.

The target audience age is the first thing you see when looking at purchasing an app, which is pretty straightforward. After that, you are given a sense of how much motion you will experience, which should be an indication of how sick you might feel if you suffer from motion sickness. I do, and I found the guidance to be a bit hit and miss. Aside from those couple of points, you really do not get a sense of how immersive the app will be, both from a realism perspective and an engagement one.

It seems to me that free trials are a must in a VR app store. Apple introduced the ability for developers to offer free trials for subscription apps in 2017, after resisting the idea for quite some time. This would work best for entertainment apps but not necessarily for all VR apps. The shift in spending from new apps to in-app purchases that we have seen over the past couple of years within traditional app stores comes from many developers offering a free app and then opening up levels or features at a price. I am not sure this technique would necessarily work with VR, where a time-based approach might be preferable: you get ten minutes of the full experience before you are asked to pay for the app. Of course, developers can still open up levels and sell cheats, but a watered-down free version of the app might just not be compelling enough to get consumers to want more.
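As a thought experiment, a time-based trial like the one described above is simple to model. The sketch below is purely illustrative and under my own assumptions: the class, method names, and the ten-minute window are mine, not any actual store or SDK API.

```python
import time

TRIAL_SECONDS = 10 * 60  # ten minutes of the full experience, as suggested above

class TimedTrial:
    """Tracks cumulative play time and gates the full app behind a purchase."""

    def __init__(self, now=time.monotonic):
        self._now = now          # injectable clock, so the logic is testable
        self._used = 0.0         # seconds of trial already consumed in past sessions
        self._started = None     # start timestamp of the current session, if any
        self.purchased = False   # set True once the user pays

    def start_session(self):
        if self._started is None:
            self._started = self._now()

    def end_session(self):
        if self._started is not None:
            self._used += self._now() - self._started
            self._started = None

    def full_experience_available(self):
        """True while the user is inside the trial window or has paid."""
        active = self._now() - self._started if self._started is not None else 0.0
        return self.purchased or (self._used + active) < TRIAL_SECONDS
```

The point of the sketch is that the gate is on cumulative time across sessions, not a cut-down feature set, so the free user sees the real product at full quality.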

I also wonder if subscription services, similar to Xbox Live Gold, might be a good idea for power users, especially at this stage of market adoption when you want users to experience as much as possible and start evangelizing. Of course, big titles will build on the success of their traditional apps and might not need further help to reach success. Yet, I am hoping VR will open up the market to new titles and different experiences.


Overall, I see the addressable market for VR coming from a blend of traditional gaming and mobile content consumption, spanning games, video, educational, and productivity apps. The more opportunities mainstream users have to try good-quality content, the more rapid adoption will be; with VR, trying is indeed believing.

The Coming of Super Bundles

on June 19, 2018
Reading Time: 4 minutes

I’ve written extensively about the growing trend of unbundling happening to the cable TV bundle. Voices in tech keep highlighting the cyclical nature of this trend where everything that was once bundled becomes unbundled only to be bundled again. The important observation we cannot escape is the inherent value in bundles. Bundles work for a variety of reasons but mostly because once a company has a billing relationship with a customer, it is effortless for them to layer value. So while we are currently in a partial phase of unbundling TV content, the reality is it will all become bundled again quite quickly. But the interesting new wrinkle I see coming is the rise of what I call the super bundle.

My Attempt to Switch From Mac to Windows

on June 19, 2018
Reading Time: 3 minutes

I recently wrote about my frustrations with my MacBook keyboard due, in my opinion, to Apple’s obsession with thinness. I found my MacBook keyboard to be just too difficult to use and unreliable, as well. Even after a replacement, random keys continue to become mushy and don’t reliably register. In speaking with friends using recent Macs I hear much the same issue.

For the first time in twenty years, it got me to consider moving to a Windows 10 notebook. I never expected that to happen, because I think the MacOS is elegant, easy to use and visually appealing. It also works well with the iPhone I use. The tipping point came with my spending 2 to 3 hours a day at the keyboard working on a new book. But when I casually looked at what alternatives were available, I was surprised by the latest crop of Windows notebooks.

Costco and the local Microsoft Store had computers from Lenovo, Dell, Microsoft and HP that were beautiful, lightweight, with none of the compromises found on the MacBooks. I had been under the impression that thin and light meant limited ports and a shorter battery life, but that’s not what I discovered.

I eventually picked a Lenovo Carbon X1 with its best-quality 14-inch, 2560 x 1440 non-touch glossy screen. It’s spectacular – almost OLED-like in its sharpness, and intensely bright. The X1 also had a full complement of ports, a memory card slot, and that terrific keyboard.

My biggest reservation in switching notebooks was moving from the MacOS to the Windows 10 operating system. It’s taken me almost two weeks to become comfortable doing most things under Windows, including a visit to the local Microsoft Store for a short class. Clearly, Microsoft is remiss in not offering the migration tools that Google and Samsung do to help iPhone users move to Android.

Switching means abandoning some of the apps that I’ve grown accustomed to on the Mac, such as Mail, Fantastical, Grab, and Contacts. I tried using Outlook for Windows, but in spite of watching YouTube videos from third parties and calls to Microsoft, I’ve not gotten it to work reliably.

I was able to access my Apple iCloud web client and its online apps, but they’re not very robust for frequent use. Fortunately, Apple offers a Windows app to access my iCloud drive, so my documents and photos were readily available. Office for Windows seems slightly better than the Mac version. I decided to use Google’s online calendar, contacts, and email clients. They’ve all improved over time, particularly the new email interface. But you’re still limited to Gmail accounts and I wasn’t able to add my Apple email account.

I found Windows 10 to be much improved compared to the last time I tried it, with Windows 8. There are still vestiges of the old version, with the large tiles that seem unnecessary and redundant, and there are hidden settings, such as the Control Panel, that take some searching to find. But Windows also has much-improved aesthetics, with a clean, clear interface and many intuitive features. The large Cortana search window provides powerful search for help on the computer and the web.

I still prefer MacOS, which I’d rate a 90 vs an 80 for Windows, using my arbitrary wine rating scale. The Windows computer hardware, however, beats Apple by a larger margin, 95 vs 70. If I were an Apple MacOS software engineer, I’d be unhappy that my fellow hardware engineers are shortchanging the software by offering products that are well behind the competition. There’s no doubt in my mind that Apple has lost its edge with its latest line of notebook computers and is way behind the Windows offerings. I’m likely not telling them anything they really don’t know. Last time I was at the Apple Store to repair my keyboard, they suggested I’d be better off with a MacBook Air.

Why I am Willing to Give Apple my Health Data

on June 18, 2018
Reading Time: 4 minutes

One of the challenges of life, regardless of who you are, is the quest to remain healthy. I admit that in my youth, this was not on the top of my list of things to be concerned with. Even into my thirties, I pretty much lived a life of excess and worked way too many hours and traveled for work without any restrictions on my schedule.

At the age of 35, during an annual physical, I showed signs of high blood pressure and a minor heart arrhythmia and was told I needed to change my lifestyle. I was also put on a mild BP drug. As I left that doctor visit, I was a bit shocked at the news. I was young and felt invincible. But as I aged (and admittedly, I did not change my lifestyle much given the warnings I received at 35), my blood pressure issues got worse and my heart problems accelerated; by age 48, I was diagnosed with Type 2 diabetes. At age 62, I had a heart attack and underwent a triple bypass.

From a genetics standpoint, both my mother and father had blood pressure and heart problems and were pre-diabetic in their later years. However, as we now know, genetics determines only a portion of our health destiny, while what we eat, our lifestyles, and environmental factors have a real impact on our actual health outcomes at any stage of our lives.

While I was growing up, we had very few tools that could help us monitor our health outside of simple things like scales, blood pressure cuffs we could use at home, and simple thermometers to read our temperatures. But these days we have home blood testing kits to check for various maladies. We have services that analyze our DNA and report all types of data about potential health problems that may lie ahead. We also have smartwatches and fitness bands that monitor our steps, heart rate, and other activities, which are then sent to apps like Apple’s Health app that give us daily readings on various health data points. I even use the Dexcom G6 Continuous Glucose Monitoring system, which gives me my blood sugar readings 24 hours a day and which I can see at a glance on my Apple Watch.

One of the things these new tech tools for health monitoring have done is give people of all ages many ways to self-check and monitor their overall health. I am encouraged that even young people in their teens are using these health-monitoring apps early on to try to stay healthy. I am even seeing senior citizens using things like the Apple Watch and fitness bands, although we need to see more of them using these tools in the future, as this generation is still a bit tech-challenged.

There are many companies in tech creating all types of products to keep us healthy and monitor our overall health conditions. However, Apple has taken a major leadership role with its aggressive approach of using the iPhone and Apple Watch to monitor and collect health data. More importantly, it has created a set of tools that anonymously send that data to various health researchers, so they can use it to create better treatments and medications to combat diseases such as multiple sclerosis, heart disease, concussions, melanoma, postpartum depression, and poor sleep health, for starters.

These tools are HealthKit and ResearchKit.

These tools have three objectives:

  1. Making medical research easier, so that understanding disease is simpler.

  2. Getting more participants into studies, so that researchers gather more data, which leads to more meaningful results.

  3. Taking research out of the lab and into the real world.

Apple also has another important tool called CareKit, a software framework that allows developers to build medically focused apps that track and manage medical care.

As a professional market researcher, I understand how important data is to understanding the various aspects of the tech market I cover. But the kind of data I look for does not deal with life-and-death issues in a human sense. Medical researchers, on the other hand, desperately need as much data and information as possible about the disease they are researching in order to better understand it, find ways to treat it, and ultimately defeat the disease altogether.

When Apple introduced the heart study last year, I was one of the first to sign up. As a heart patient for life, I clearly want to have the best solutions for dealing with this disease and if my heart data can help deliver better treatment for all, then I am all in. The data I send to Apple is anonymous and private. Consequently, I did not hesitate to participate in this study. In my discussions with others who have diseases that are tracked via Apple products and HealthKit and ResearchKit, they also seem to be very willing to send that data to researchers via Apple, as they too want to see better ways to treat and possibly cure their particular diseases.

Apple’s role in helping people track their health and then get that data to researchers shouldn’t be underestimated. This is a big deal for Apple and, more importantly, for the health researchers and professionals who need as much help as possible as they tackle the various health issues and diseases they study. I see this as one of Apple’s greatest callings. In last September’s keynote, Apple CEO Tim Cook stated that “healthcare is big for Apple’s future.”

I had a meeting with the retired CEO of a major health organization a few years back and well before Apple declared their strong commitments to health apps and products. In the meeting, he told me that he had been in talks with Apple about their ways of thinking about future health apps and services. Before he left my office, he made a prediction to me. He said, “Apple will emerge as the major company who will change the face of healthcare.” Given the timing of this meeting, which took place not long after Steve Jobs died, his prediction seems prophetic.

We are still in the early stages of this data impacting current research studies on the various diseases I mentioned above. Because these tools can be applied to all types of health conditions, I expect to see more studies taking advantage of Apple’s various health research tools and apps.

We should all be rooting for Apple to succeed with their health initiatives. Of course, it would be good for their business if they are successful, but it would be a bigger win for mankind if they succeed.

Telecom and Mobile Implications of the AT&T-Time Warner Deal

on June 15, 2018
Reading Time: 4 minutes

Yesterday, Judge Leon ruled that AT&T can acquire Time Warner. In this column, I’d like to discuss the broad implications of the deal, and more specifically what it means for the telecom and mobile landscape.

First off, congratulations to AT&T. They stuck to their guns and didn’t agree to any of the initial — and unreasonable — DOJ terms to sell off pieces of Time Warner to get the deal through. Hopefully, AT&T will be more successful with Time Warner than AOL was; ironically, AOL now sits in the hands of arch-rival Verizon’s unfortunately named Oath.

Some of the benefits of the deal will be apparent to consumers within a few months. Expect some additional bennies and content bundles for AT&T wireless subscribers. HBO for free, a la T-Mobile Netflix? In the medium term, marrying the huge Time Warner ad inventory with the insights on AT&T-DTV’s customers will create value. It will be a longer-term project to build a more effective ad-targeting platform, pulling together the content, ad inventory, and customer data in an effective – and responsible – manner.

AT&T will have to tread carefully. With the tech industry reeling from myriad episodes of inappropriate exposure/use of customer data, the $200 billion AT&T-Time Warner behemoth, which will still be under greater regulatory scrutiny than its Silicon Valley brethren, will have to be both careful and transparent with regard to how that customer data is leveraged. It will also have to abide by the near promises it made during the trial to not discriminate in the provision of Time Warner content to DTV rivals. That said, the TV and rights fees landscape is in turmoil and under pressure, so needles will have to be threaded here.

Against this backdrop, and with uncanny timing, net neutrality was officially repealed this week, smoothing the way for all of the above to be implemented.

The clarity of the ruling and its lack of conditions will help to unleash a wave of M&A activity in the media and content landscape. Most immediately, the bid for 21st Century Fox assets will heat up, with Comcast entering the fray.

I believe this will also ease the path for the T-Mobile/Sprint deal. Just as the TV market has changed hugely with OTT, streaming, and the impact of Netflix, Amazon, Apple, YouTube and so on, so too has the telecom business. Landline is all but dead, broadband is a near monopoly in 50% of the country, and demand for wireless data (driven by video) and the capex to support it remains near insatiable. It is hard to imagine T-Mobile and Sprint competing successfully, independently, and profitably with AT&T and Verizon, long-term. Especially with DISH’s spectrum, Comcast/Charter MVNOs, and possible entry of some Internet/Web giant into the space, as part of the mix.

I think T-Mobile and Sprint can successfully make the argument that the industry landscape has changed significantly since a deal was first broached a few years ago. The biggest benefit of 5G is capacity – in the form of spectrum breadth and depth, and cell site density. T-Mobile and Sprint will be able to do more together than they would do independently (1+1=3, as it were).

5G will be another beneficiary of this evolving telecom/media landscape. Verizon, AT&T, T-Mobile (Layer 3), and Comcast all have important content and video assets, which in addition to driving traffic growth, will also unleash innovation in apps, games, and so on that will form some of the business cases for 5G, such as in AR and VR. This thinking was on display last week at the AT&T Shape conference, which was held in Los Angeles at  – wait for it –  the Time Warner Studios lot (see my column on that here).

I also think that Verizon, Comcast, and AT&T getting more deeply into content and media will incent some of the major internet players, namely Google, Facebook, Amazon, Apple, and Netflix to be more masters of their own domain with regard to telecom and mobile. At the very least, it will drive the development of edge networking (and hence small cells/data centers) and 5G. One could also envision a deal for DISH’s spectrum, their participation in future spectrum auctions, leveraging Wi-Fi/unlicensed/3.5 GHz spectrum, or some level of MVNO relationship — or some hybrid of all of the above.

The telecom landscape will look less homogeneous going forward. Mobile-centric AT&T now looks more like broadband-centric Comcast than it does Verizon. Verizon, with its leadership in 5G, emphasis on 5G FWA, and appointment of former Ericsson CEO Hans Vestberg as its next CEO, has taken a turn toward re-emphasizing the network. It is still in the early stages of truly leveraging its Oath asset, though if it is going to be a serious player in media/content/advertising, there’s more dealing to be done. T-Mobile and Sprint together look the most like a wireless pure play, though I could certainly see how Sprint’s 2.5 GHz spectrum could be leveraged as a potential competitor to broadband in some markets. And as part of the likely acceleration of M&A in the telecom/media arena over the next year, it is hard to imagine DISH’s spectrum lying fallow for much longer.

News You Might Have Missed: Week of June 15, 2018

on June 15, 2018
Reading Time: 4 minutes

Office 365 Gets a Redesign

This week Microsoft announced that it will introduce a series of changes to Office.com and Office 365. The changes are based on extensive user feedback and aim to focus on simplicity and context.

The initial set of updates includes three changes:

Simplified ribbon – An updated version of the ribbon designed to help users focus on their work and collaborate. People who prefer to dedicate more screen space to the commands will still be able to expand the ribbon to the classic three-line view.

Mega Merger Mania

on June 14, 2018
Reading Time: 4 minutes

There is a bigger-picture observation to be made in the wake of the AT&T and Time Warner merger/acquisition. It is an observation a long time coming, as we have already seen a number of large mergers and acquisitions go down in the semiconductor industry, with even more coming. I have continually predicted the consolidation of the semiconductor industry, and others have been making similar predictions about the media industry. It is worth looking at why this is happening, why it will continue to happen, and what that may mean going forward for startups.

A Gaming Renaissance

on June 13, 2018
Reading Time: 4 minutes

There are a number of interesting trends emerging around video games worth observing. On the heels of the gaming industry’s biggest show of the year, E3, I thought it would be a good time to outline the broader trends I see happening that are worth watching.

Gen Z PC Gaming Growth
This is one of the bigger sleeper trends I’m watching. While I’m not ready to completely and boldly state that Gen Z is dumping consoles for PC gaming, it is certainly trending that way. I caught wind of this trend a few summers ago when, all of a sudden, more than a dozen friends and family members from around the country asked my opinion on an affordable gaming notebook for their high-school boy, who wanted a notebook for school but also to play PC games. This piqued my interest, and upon further questioning, I found the gaming desire was driven by many of said teens’ friends starting to play more PC games; the teens wanted to start playing PC games online with their friends.

I chatted with over a dozen parents, and it was the same story every time. The kid wanted a notebook for school, the kid’s friends were all starting to play more PC games online, so the kid wanted a gaming notebook for school and to play online with friends. I went on to ask all the parents I talked to about the gaming console. Nearly everyone had an XBOX or PlayStation in the home, and everyone said their kid and their kid’s friends were playing it less and less, instead playing PC games online. In fact, in several instances, the parent (who is around my age of 40 and was a big console gamer like I am/was) chuckled while telling me this anecdote: “my son and his friends think console gaming is for old people.”

It is relevant to this trend to understand a game called PUBG (PlayerUnknown’s Battlegrounds). This game was single-handedly the reason teen males were flocking to PC games and leaving their consoles. Yes, PUBG eventually came to XBOX, but that was not the case at the time. This game enlightened Gen Z about the faster pace of innovation in the PC gaming sector in both hardware and software. Every year your games can get richer and more immersive if you are willing to spend money on a new GPU, and similarly, new games are released and updated with new features faster than on consoles. All of this together makes for a compelling experience for this particular generation.

What I was seeing, with a single game, and social dynamic driving adoption of a gaming platform, was like watching a movie I’d seen before with the original XBOX. I had the privilege of doing some work with the original XBOX group, and the Halo phenomenon was remarkable at the time. For my demographic, Halo and playing online with friends in large battlegrounds able to battle each other as well as others, was a brand new experience and one that was responsible for the first XBOX’s rise to fame.

The similarities I’m seeing between the motivations driving Gen Z to PC gaming and those behind the rise of the original XBOX and console gaming with Gen Y/X are eerie.

Massive Multiplayer Games Going Mainstream
Another interesting trend is how a game like Fortnite may be leading the charge in bringing massive multiplayer online gaming to the masses. Fortnite is a more consumer-friendly version of PUBG; it quickly rose to an amazing 2 million concurrent players and boasts around 3.5 million players monthly. While PUBG has similar numbers, Fortnite also came to mobile, and much of its growth has been people playing it on their smartphones and tablets.

I have a hunch the success of Fortnite, which proves consumers are comfortable playing large multiplayer games on their smartphones, may open the floodgates for this type of gaming, specifically in Western markets. What many may not realize is that this is common behavior in China, where hundreds of millions of people play online games on their smartphones, often in large groups. The genres driving this behavior vary, but I think we may have reached a tipping point where mobile gaming starts to become a driver of global MMO gaming.

This is exciting because we could see new innovation in games and gameplay. PUBG pioneered a new game style, called Battle Royale, with a twist: it starts you off in a massive world but then forces the play area to shrink in order to bring players closer together, leading to inevitable battles. It was a fascinating new dynamic that is now being adopted by other games and game types. Fortnite may have opened Pandora’s Box on the mobile gaming opportunity globally and could lead a wave of new mobile game innovation in both genre and game dynamics.

I know I just covered two completely different ends of the gaming spectrum, hardcore PC gaming and more approachable mobile gaming, but in some ways they are related, given that the Battle Royale genre is at the center of both trends. Ultimately, we may be seeing a movement toward truly massive global multiplayer games that are playable on all platforms. Imagine a game that every person in the world could play together in massive worlds, no matter what device they have: a high-end gaming PC, smartphone, tablet, basic notebook, console, streaming TV box, and so on, all enabling a truly global gaming environment. This would be truly remarkable, but entirely possible, and whoever cracks this first would be sitting on a gold mine.

AMD Could Grab 15% of the Server Market, says Intel

on June 13, 2018
Reading Time: 2 minutes

Before the launch of its Zen-architecture processors, AMD had fallen to basically zero percent market share in the server and data center space. At its peak, AMD held 25% of the market with the Opteron family, but limited improvement in performance and features slowly dragged the brand down and Intel took over the segment, providing valuable margin and revenue.

As I have written many times, the new EPYC family of chips has the capability to take back market share from Intel in the server space with its combination of performance and price-aggressive sales. AMD internally has been targeting a 5% share goal of this segment, worth at least $1B of the total $20B market size.

However, it appears that AMD might be underselling its own potential, and Intel’s CEO agrees.

In a new update, analyst firm Instinet reported that it met and spoke directly with Intel CEO Brian Krzanich and found that Intel sees a brighter future for AMD in the data center. Krzanich bluntly stated that Intel would lose server share to AMD in 2018, which is an easy statement to back up: going from near-zero share to any measurable sales will mean fewer parts sold by Intel.

Clearly AMD is not holding back on marketing for EPYC.

In the discussion, Krzanich stated that “it was Intel’s job to not let AMD capture 15-20% market share.” If Intel is preparing for a market where AMD is able to jump to that level of sales and server deployment, then the future for both companies could see drastic shifts. If AMD is able to capture 15% of data center processor sales, that would equate to $3B in revenue migrating from the incumbent to the challenger. By no measurement is this merely a footnote.
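The share-to-revenue arithmetic is simple to check against the $20B market size cited earlier (the variable names below are mine, for illustration):

```python
market_size = 20e9        # total data center CPU market cited above, in dollars
amd_target_share = 0.05   # AMD's stated internal 5% share goal
intel_ceiling = 0.15      # low end of the 15-20% share Krzanich referenced

amd_target_revenue = market_size * amd_target_share   # roughly $1B
contested_revenue = market_size * intel_ceiling       # roughly $3B

print(f"${amd_target_revenue / 1e9:.0f}B at 5% share, "
      f"${contested_revenue / 1e9:.0f}B at 15% share")
# prints: $1B at 5% share, $3B at 15% share
```

This is why the gap between AMD's own 5% goal and Intel's 15-20% worry matters: the difference is a factor of three or four in revenue at stake.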

For months I have been writing that AMD products and roadmaps, along with the impressive execution the teams have provided, would turn into long-term advantages for the company. AMD knows that it cannot compete in every portion of the data center market with the EPYC chip family as it exists today, but where it does offer performance advantages or equivalency, AMD was smart enough to be aggressive with pricing and marketing, essentially forcing major customers, from Microsoft to Tencent, to test and deploy hardware.

Apparently Intel feels similarly.

Other details in the commentary from Instinet show the strain Intel’s slowing production roadmap is putting on product development. Intel recently announced during an earnings call that its 10nm process technology, which would allow it to produce smaller, faster, more power-efficient chips, has been delayed until 2019.

Krzanich claims that customers do not care about the specifics of how the chips are made, only that performance and features improve year over year. Intel plans updates to its current process technology for additional design tweaking, but the longer Intel takes to improve its manufacturing significantly, the more time rivals AMD and NVIDIA have to use third-party foundries to improve their market positions.

What I Learned from the Women in Technology Summit

on June 13, 2018
Reading Time: 4 minutes

This week I spent a couple of days at the Women in Technology Summit hosted by WITI. I was invited to moderate two panels, and rather than just going in for those, I decided to invest some time listening to what other speakers had to say, attending a workshop on how to better communicate with men and build allies, and networking. Over the years, I have attended a few women-in-tech luncheons and breakfasts at broader industry events, but I usually shy away from networking events marketed explicitly at women. This is mostly because I prefer to fight my way into events where the majority of attendees are men, as this is, after all, what best reflects my day-to-day in tech. That said, I think there is power in conversations that happen in an environment where you feel it is safe to be open, and this is precisely what the WITI Summit offered. There is power in sharing stories and opinions, and in openly talking about the challenges we face without fear of being judged and with the reassurance that, more often than not, the person you are talking to can relate to what you are saying.

There are Many Smart Women in Tech

You often hear men complain about a shortage of women in tech. Not enough women to keynote at CES, not enough women in tech to follow on Twitter, not enough women in tech to invite as guests on their podcasts. Time and time again, I see women compiling extensive lists of the talent that is out there if you are willing to look. And by look, I mean just a quick glance: these women are not hiding under a rock; they are openly visible, doing their thing and demonstrating their awesomeness.

In case you are tempted to believe this shortage nonsense, let me tell you that at the WITI Summit there were over 100, yes one hundred, speakers, panelists, and coaches, and guess what: they were all women. I can bet the organizers did not have to send out search parties to hunt them down either. What struck me was the quality of the women on stage. They knew what they were talking about, many had science and engineering backgrounds, they were engaged with the audience, they were generous with their knowledge and time, and they genuinely wanted to make a difference.

Something that really struck me in listening to the speakers was that the vast majority of them did not just tell a story or speak hypothetically about a topic, whether that topic was a new technology like AI or the issue of diversity and inclusion in tech. They were prepared, talked with purpose, and always left the audience something to reflect on. All while rarely mentioning their own personal achievements other than to make a point.

There was a Lack of Young White Women in the Conversation

As I was looking at the crowd in the sessions, I started to notice that the mix looked a little different from what I see at other tech events. Coming fresh from the round of developer events over the past couple of months, and being used to seeing young white women make up a significant proportion of the female mix, I was stunned to find a considerable lack of millennial white women at the summit. There were many millennial minority women in the audience, but it was hard to spot young white women.

I am aware that millennials are the group in which minorities are becoming the numerical majority, but I think there might be something else going on. I do wonder if young white women share my feeling that we should find our place at industry events rather than at events focused on women only. Maybe young white women are, in general, more comfortable with their place in tech, thanks in part to the efforts of those who came before them.

I hate to think that young white women do not want to be part of the conversation about diversity and inclusion. As a matter of fact, I find it hard to believe that is the case. I do wonder, however, if they might not think there is something to be learned from women who were the first in their company to become CEO, a lead engineer, or a product manager. Of course, the bigger point is that whether or not they think they can learn or benefit from being part of the conversation is somewhat irrelevant. What I do hope is that young white women understand that they, like me, have a responsibility to help and support other women, including women from ethnic minorities.

The Best Pieces of Advice I Heard

As speakers shared their stories and coaches shared their knowledge, I was listening to find little nuggets of wisdom, and that is precisely what I found:

Ahalya Kethees, founder of Lead with Brilliance, said: “You cannot be truly curious about someone if you are judgmental.” I never thought about it this way, but it is true that if you are judging someone, it is hard to keep an open mind and want to know more about what they are talking about or who they are.

Minette Norman, VP of Engineering at Autodesk, said: “Stay true to yourself, don’t try and be one of the boys.” I can really relate to this. I tried to fit in by being one of the boys, but it just was not for me because it was not me. Over the years I found that being me, with my faults and quirks, was the most effective approach to building relationships with clients as well as colleagues.

Several of the speakers urged the audience to go and get a career coach. And apparently, according to a survey IDC ran across WITI members, a male coach will help you get a higher salary more than a female coach will! Not a surprise when you consider that women generally are not good at negotiating their contracts and assessing their worth.

Barbara Nelson, GM & VP at Western Digital, said: “Fight your own battle.” Yes, we need sponsors, advocates, and allies, but we need to be prepared to speak up, ask the hard questions, and fight our own battles.

Lastly, I leave you with my action point: amplify women’s voices. Highlight when one of your female colleagues says or does something smart, retweet and follow other women in tech, and stop a male colleague when he interrupts a woman in a meeting so she gets to finish talking. Let’s not fight among ourselves to get a seat at the table; let’s bring in a chair for someone else when we get there!

The Business of Business Software

on June 12, 2018
Reading Time: 3 minutes

When most people think about software for business, they tend to think of things like Microsoft Office. After all, Office is the application suite that many of us spend a great deal of time in during our work days.

In reality, however, productivity suites like Office only represent a small portion of the overall market for software used in businesses and other large enterprises. Some of the biggest categories are things like Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Business Intelligence (BI) and analytics. In addition, there are millions of custom applications (many of which are built with these types of tools as a foundation) that play an extremely important role in the operation of today’s businesses.

While Microsoft is an important player in many of these categories, it’s companies like IBM, SAP, Oracle, and Salesforce that are the leaders in many of these lesser-known segments that are commonly referred to as “back office” operations (a historical phrase that stems from many business organizations having the operational teams doing this work physically located in the rear section of office buildings). In fact, companies like SAP have built large businesses creating the tools and platforms that sit at the central operational point for many organizations in areas ranging from supply-chain management to human resources and other personnel systems.

At last week’s SAPPHIRE NOW, SAP’s annual customer conference, the company announced a major entry into the “front office” CRM market with C/4 HANA. The new offering ties together the technology from a number of different acquisitions it has made to create a suite of applications and cloud services that allows sales and marketing people (who typically sat in the “front” part of office buildings) to organize all the critical information about their customers in a single place. C/4 HANA builds on the company’s existing in-memory HANA database architecture, which stores all data and applications in server memory (versus in storage) to speed overall performance.

What’s interesting about the release is the position it holds in the overall evolution of the enterprise software market. For several decades, companies like SAP were strongly associated with old legacy software that ran only in the physical servers within a company’s data center—or “on premise,” as many like to say. The applications were large, monolithic chunks of code that were so complicated, they almost always required external help from large consulting firms and system integrators, or SIs (such as Accenture, CapGemini, the services division of IBM, etc.), to properly install and deploy.

Over the last decade or so, however, we’ve seen companies like SAP and IBM evolve their software architectures and approaches, in large part because of the dramatic rise of cloud-based software companies such as Salesforce.com. The efficiencies, flexibility, and cost-savings enabled by these internet-based business software companies and the new business models they offered—such as Software as a Service (SaaS), Platform as a Service (PaaS), etc.—forced some dramatic changes from the traditional enterprise software vendors. In particular, we saw a dramatic increase in the use of public cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud, to host and run applications that traditionally only ran in corporate data centers. In addition, we’ve witnessed the dramatic increase of enterprise mobile applications that provide a means to run or interact with business software on our smartphones and other mobile devices.

The new C/4 HANA release is an intriguing example of these many developments because it is a cloud-first set of tools that companies can now run in the public cloud across any of these major cloud platforms, in their own private cloud within their data center, or in a combined “hybrid” cloud model. Architecturally, the suite incorporates a large number of microservices—a dramatically different and more modular structure than older monolithic applications—that offers much more flexibility in terms of how the software can be leveraged, updated, and enhanced. In particular, the ability to do things like plug-in new enhancements such as AI and machine learning via SAP’s Leonardo suite of new technologies is indicative of the new approach the company is taking with its software offerings.

At this year’s SAPPHIRE NOW, SAP also announced an SDK (software development kit) that will allow native access to all its services from Google’s Android platform for mobile access. This builds on the work the company had previously done for iOS and Apple devices.

Even with all these enhancements and long-term evolutionary progress, there’s still no question that the bulk of enterprise software offerings can still be extremely complex and difficult to completely decipher. However, it is also clear that tremendous progress is being made and that, in turn, is helping companies who use these tools improve their efficiencies and enhance the digital readiness of their organizations. As the business environment continues to advance, it’s good to see the toolmakers who’ve supported these companies taking the steps necessary to make these digital transformations possible.

Are Scooters Fulfilling Segway’s Original Dream?

on June 12, 2018
Reading Time: 4 minutes

At the end of the last century, the tech world was in a flutter over an unannounced product that appeared to be revolutionary. Dean Kamen, one of the smartest and brightest inventors of the last 50 years, was reportedly working on a new device cloaked in secrecy. It received great attention in 1999 from VC John Doerr, who speculated it would be bigger than the Internet, and from Steve Jobs, who originally said it would be bigger than the PC, although he retracted that statement and was critical of the Segway once it shipped.

Qualcomm Announces New Snapdragon for PCs, Kills Partners’ Near-Term Prospects

on June 8, 2018
Reading Time: 3 minutes

At this week’s Computex show in Taiwan Qualcomm announced the next generation of silicon for the Windows on Snapdragon platform. The new chip is called the Snapdragon 850, and rather than simply repurposing an existing high-end smartphone processor the company has cooked up a modified chip specifically for the Windows PC market. Qualcomm says the new chip will provide a 30 percent system-wide performance boost over the previous generation. I’m pleased to see Qualcomm pushing forward here, as this area will eventually evolve into a crucial piece of the PC market. However, announcing it now, with an eye toward new products appearing by year’s end, puts its existing hardware partners in a very tough spot.

Tough Reviews, And a Short Runway
Qualcomm and Microsoft officially launched the Windows 10 PCs powered by the Snapdragon Mobile PC Platform in December 2017. The promise: By using the Snapdragon 835 processor and related radios, Windows notebook and detachable products would offer instant startup, extremely long battery life, and a constant connection via LTE. Initial PC partners included HP, Lenovo, and ASUS.

Reviews of the three initial products have been mixed at best, with many reviewers complaining about slow performance, driver challenges, and app compatibility. But most also acknowledge the benefits of smartphone-like instant on, the luxury of connectivity beyond WiFi, and battery runtimes measured in days versus hours. I’d argue that the technical issues of rolling out a new platform like this were unavoidable. However, the larger self-inflicted wound here was that nobody did a great job of articulating who these products would best serve. This fundamental issue led to some head-scratching price points and confused marketing. I talked about the missed opportunity around commercial users back in December.

There was also the issue of product availability. While the vendors announced their products back in December, shipments didn’t start until 2018. In fact, while HP’s $1,000 Envy X2 started shipping in March, neither Lenovo’s $900 Miix 630 nor ASUS’s $700 NovaGo TP370QL is widely available even today. Amazon recently launched a landing page dedicated to the Always-Connected Windows 10 PC with a bundled option for free data from Sprint for the rest of 2018. The ASUS product moved from pre-order to available on June 7; Lenovo’s product still has a pre-order button that says it will launch June 27th.

That landing page appears to have gone live just days before Qualcomm announced the 850 in Taiwan and promised new hardware from partners, including Samsung, by the end of the year. Now, if I’m one of the vendors who threw support behind Windows on Snapdragon early, only to have Qualcomm Osborne my product before I’ve even started shipping it, I’m not a happy camper.

Might as Well Wait
As a frequent business traveler, the Windows on Snapdragon concept is very appealing to me. I realize that performance won’t come close to what even lower-end X86 processors from Intel and AMD offer, but I’m willing to make that trade for the benefits. As a result, I expect that for the first few years these types of PCs will be better as companion/travel devices rather than outright replacements for a traditional PC. In my case, I could see one competing for space in my bag with the LTE-enabled iPad Pro I carry today. Except when I carry the Pro, I still must carry my PC because there are some tasks I can’t do well on iOS.
Both the Lenovo and HP products are detachable tablets, whereas the ASUS is a convertible clamshell, which is the form factor I’m most eager to test. I was close to pulling the trigger on the ASUS through Amazon when the Qualcomm 850 news hit. Buying one now seems wasteful, with new, improved product inbound by the holidays. And that’s not the kind of news vendors want to hear.

Now, many will say that this is the nature of technology, that something new is always coming next. And while that’s essentially true, this move seems particularly egregious at a time when Qualcomm and Microsoft are trying to get skeptical PC vendors to support this new platform. Plus, we’re not talking about a speed bump to a well-established platform; this is a highly visible initiative with an awful lot of skeptics within the industry. Qualcomm might have decided that the poor initial reviews warranted a fast follow-up; one hopes their existing partners were in on that decision.
Bottom line: I continue to find the prospects of Windows on Snapdragon interesting, and I expect the new products based on the 850 chip will perform noticeably better than the ones running on the 835. But if Qualcomm and Microsoft expect their partners to continue to support them in this endeavor, they’ve got to do a better job of supporting them in return.

News You Might Have Missed: Week of June 8th, 2018

on June 8, 2018
Reading Time: 4 minutes

The Sonos Beam Provides Options to Broaden Appeal

This week, Sonos launched Sonos Beam, a $399 soundbar that will be available on July 17. Out of the box, Beam comes with Amazon Alexa in the US, UK, Germany, Canada, Australia, New Zealand and soon, in France. Beam will support additional voice assistants as they become available on Sonos around the world and won’t lock owners into specific streaming boxes or services. AirPlay 2 will be available on Sonos in July via a free software update. With AirPlay 2, customers can play music and podcasts from their iOS devices directly on their Sonos speakers, including the new Sonos Beam, Sonos One, Playbase, and the second generation Play:5. And, with a single supported speaker, AirPlay content can be streamed to other Sonos speakers in the system. Customers enjoying AirPlay 2 on Sonos will also gain a new voice experience with the addition of Siri. Ask Siri to play any track, album, or playlist on Apple Music by using an iOS device to start playing on Sonos.

Intel and AMD both dive into many-core CPU race

on June 7, 2018
Reading Time: 4 minutes

It seems not long ago that 2- and 4-core processors were seemingly immovable fixtures of the consumer CPU market. Both Intel and AMD had become satisfied with four cores as the pinnacle of our computing environments, at least for mainstream PCs. And in the notebook space, the line was drawn even lower, with the majority of thin-and-light machines shipping from OEMs in dual-core configurations, leaving only flagship gaming devices with H-series quad-core options.

Intel first launched 6-core processors in its HEDT (high-end desktop) line back in 2010, when it came up with the idea of migrating its Xeon workstation products to a high-end, high-margin enthusiast market. But core count increases were slow to arrive, both because of software limitations and because competition from AMD was minimal, at best.

But when AMD launched Ryzen last year, it started a war that continues to this day. By releasing an 8-core, 16-thread processor at mainstream prices, well under where Intel had placed its HEDT line, AMD was able to accomplish something that we had predicted would start years earlier: a core count race.

Obviously, AMD didn’t create an 8-core part and price it aggressively against Intel’s options out of the goodness of its heart. AMD knew it would fall behind the Intel CPU lineup in many single-threaded tasks like gaming and productivity. To differentiate, and to be able to claim performance benefits in more content-creation-heavy tasks, AMD was willing to spend additional silicon, offering an 8-core design priced against Intel’s 4-core CPUs.

The response from Intel was slower than many would have liked, but respond it did. It launched 6-core mainstream Coffee Lake processors that closed the gap but required new motherboards and appeared to put Intel out of its expected cadence of release schedules.

Then AMD brought out Threadripper, a competitor to the Intel X-series platforms it had never had previously. It doubled the core count to 16, with 32 threads available. As a result, Intel moved up its schedule for Skylake-X and released parts with up to 18 cores, though at very high prices by comparison.

Internally, Intel executives were livid that AMD had beaten them to the punch and had been able to quickly release a 16-core offering to steal mindshare in a market Intel had created and led throughout its existence.

And thus, the current many-core CPU races began.

At Computex this week, both Intel and AMD are beating this drum. The many-core race is showing all its glory, and all of its problems.

Intel’s press conference was first, and the company had heard rumblings that AMD might be planning a reveal of its 2nd-generation Threadripper processors with higher core counts. So it devised an impressive demonstration of a 28-core processor running at an unheard-of 5 GHz on all cores; it’s hard to overstate how impressive that amount of performance is. It produced a benchmark score in a common rendering test that was 2.2x faster than anything we had previously seen in a single-socket, stock configuration.

This demo used a previously unutilized socket on a consumer platform, LGA3647, built for the current generation of Xeon Scalable processors. The chip is also a single, monolithic die, which does present some architectural benefits over AMD’s multi-chip designs, if you can get past the manufacturing difficulties.

However, there has been a lot of fallout from this demo. Rather than anything resembling a standard consumer cooling configuration, Intel used a water chiller running at 1 HP (horsepower), utilizing A/C refrigerant and insulated tubing to get the CPU down to 4 degrees Celsius. This was nothing like a consumer product demo, and was more of a technology and capability demo. We will not see a product at these performance levels available to buy this year, and that knowledge has put some media, initially impressed by the demo, in a foul mood.

The AMD press conference was quite different. AMD SVP Jim Anderson showed a 32-core Threadripper processor using the same socket as the previous generation solutions. AMD is doubling the core count for its high-end consumer product line again in just a single year. This brings Threadripper up to the same core and thread count as its EPYC server CPU family.

AMD’s demo didn’t focus on specific performance numbers though it did compare a 24-core version of Threadripper to an 18-core version of Intel’s currently shipping HEDT family. AMD went out of its way to mention that both the 24-core and 32-core demos were running on air-cooled systems, not requiring any exotic cooling solutions.

It is likely AMD was planning to show specific benchmark numbers at its event, but because Intel had gone the “insane” route and put forward some unfathomably impressive scores, AMD decided to back off. Even though media and analysts who pay attention to the circumstances around these demos would understand how misleading a direct comparison would be, the comparison would have been made anyway, and AMD would have lost it.

As it stands, AMD was showing us what we will have access to later in Q3 of 2018 while Intel was showing us something we may never get to utilize.

The takeaway from both events and product demos is that the many-core future is here, even if the competitors took very different approaches to showcase it.

There are legitimate questions about the usefulness of this many-core race, as software that can utilize this many threads on a PC is expanding slowly, but creating powerful hardware that offers flexibility to developers is always a positive move. We can’t build the future if we don’t have the hardware to do it.

Reading the WWDC Tea Leaves for Siri, Mac and iPad

on June 7, 2018
Reading Time: 4 minutes

As I articulated earlier in the week, Apple’s focus on features that help us be more productive and efficient may not have been the most exciting when it comes to the future and brand-new things. However, Apple gave us some signals worth pondering about what the future may hold.

The Unseen Impact of iOS Apps on the Mac

on June 6, 2018
Reading Time: 3 minutes

One of the most significant questions often asked by the Apple faithful had been whether macOS and iOS would ever be merged. At WWDC, Apple executives addressed this exact question and forcefully stated that it would never happen.

However, they went on to say that there is legitimate merit in enabling developers by making it easier to bring iOS apps over to macOS and making them part of Apple’s continuity story. This means that if you use the new Voice Memos app on the Mac, which is also on iOS, whatever you record on the Mac is also available on the iPhone and iPad instantaneously.

Apple No Longer Tells Users What Is Best For Them

on June 6, 2018
Reading Time: 4 minutes

At the end of the keynote at Apple’s Developer Conference on Monday, there were two areas where I thought Apple clearly decided it was not up to them to tell their users what they should and should not do: Siri Shortcuts and Screen Time.

Over the years, Apple has been criticized for deciding what was best for its users: the color scheme on your Mac, the U2 album in your library, and slowing down your old iPhone to preserve its battery. In all these cases, users did not like that a decision was made for them, so how could they appreciate Apple telling them how best to take advantage of Siri, manage their time, and parent their children?

Apple decided it was more useful to provide tools so users could decide for themselves how to do all those things better. Such a change might come from a shift in company philosophy. I think, however, it is more likely to have come from the realization that Apple’s users today are as diverse as they have ever been. For Apple to find a middle ground between my mom, my daughter, and me is no easy task.

Siri Shortcuts Aimed at Pros to Benefit the Masses

Apple has been trying to figure out how to talk about AI, ML, and Siri over the past year or so. Siri used to mean voice, and other “smartness” happening on the iPhone was not necessarily called out. With the introduction of the A10 Fusion and A11 Bionic, Apple started to be more explicit in calling out AI- and ML-enabled capabilities. On Monday, however, rather than talk about AI, Apple focused on positioning Siri as an assistant that helps you even when you do not talk to her, just like a human assistant would.

Digital assistant adoption is still in its infancy, and so is the understanding and embrace of AI. Assistants also suffer from users having to learn how to communicate with them. While some are more flexible than others, we are all still trying to determine the exact way to ask them to do something for us, and, let’s face it, we are still far from natural language.

The introduction of Siri Shortcuts seems to try to bypass these issues. Siri Shortcuts are a way for users to put together a phrase that triggers either one task, like “find my keys,” or a chain of actions, like a “morning routine” that sets an alarm, checks the traffic, and reminds me to order coffee. Siri will also proactively suggest Shortcuts based on your behavior, behavior that varies widely across the large user base Apple has today. Siri Shortcuts put the “burden,” for lack of a better word, of setup on the user, which is indeed not for everyone. I would expect, however, that heavily engaged users will spend the time setting them up and, as they see the return, will do more. In a way, I think of these users as similar to those who spent time fine-tuning their Apple Watch to become a complement to their iPhone rather than a replica of it.
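Conceptually, a chained shortcut is just a named macro bound to an ordered list of actions. A minimal sketch of that idea in Python, purely illustrative (the names and actions here are hypothetical, not Apple APIs, and Apple’s actual implementation is not public):

```python
# Toy model of a voice-triggered macro system: a spoken phrase maps to
# an ordered chain of actions that run when the phrase is recognized.
# Everything here is hypothetical and for illustration only.

shortcuts = {}

def register_shortcut(phrase, actions):
    """Bind a spoken phrase to a list of zero-argument actions."""
    shortcuts[phrase] = actions

def handle_utterance(phrase):
    """Run every action bound to the phrase, in order, and collect results."""
    results = []
    for action in shortcuts.get(phrase, []):
        results.append(action())
    return results

# A single-task shortcut...
register_shortcut("find my keys", [lambda: "pinging key fob"])

# ...and a chained "morning routine" like the one described above.
register_shortcut("morning routine", [
    lambda: "alarm set for 7:00",
    lambda: "traffic: 25 min to work",
    lambda: "coffee order placed",
])

print(handle_utterance("morning routine"))
```

The point of the sketch is the asymmetry it makes visible: the user does the setup work once (the `register_shortcut` calls), and from then on a single phrase fans out into the whole chain.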

Apple will learn from these early adopters and could feed data into ML models to create the most popular shortcuts for a broader set of users. The whole “Siri is behind” rhetoric is, after all, affecting engaged users more than those who are only interested in using Siri to set a timer.

Screen Time Empowers You through Data

Digital health has risen to the attention of many over the past few months, and companies are starting to respond. Apple, like Google, is providing tools to raise awareness of what we all do with our devices. While a lot of the attention has been on the well-being of kids, adults too could benefit from a little less screen time, and I sure know I could. Apple took a two-pronged approach. On the one hand, it has made Do Not Disturb more efficient and broader; on the other, it added Screen Time which, similarly to Google’s Dashboard, gives you a lot of information about how you use your apps. Siri also steps in to help you manage notifications, which are a big part of what draws you to look at your phone in the first place. The way you engage with those notifications gives Siri a clue about how important they are and helps it suggest how best to set them up.

We live, however, in a free-will world, so Apple is not shutting things down for you. Users are in control, and they should self-manage. I am a little skeptical about adults really making changes for the better, but maybe I am just projecting my own fears about my ability to change.

Where I do think there is a lot of potential is in Screen Time and kids. I have always maintained that, as a parent, it is my responsibility to manage my child’s screen time, but I welcome any vendor giving me tools to help me do that. What I like about Apple’s Screen Time is that I can teach my daughter to be responsible about device time, just as she is responsible for other things in her analog life: feeding the bearded dragon, keeping track of her belongings at school, and cleaning up her toys. I want my child to look at the Screen Time report and responsibly learn to self-manage. I want her to understand that it is not just about time with the device; it is about how you use that time. There is a difference between reading books, writing your journal, or drawing on your iPad and spending hours on Snapchat or YouTube. I do not expect her to get there straight away, but I think that having more self-awareness will undoubtedly help.

 

Overall, I felt Apple focused on practical improvements to the experience users have today. It was not all sexy; truth be told, most of it was not, but this does not mean it will not help grow engagement and loyalty.

 

Platforms of Efficiency

on June 5, 2018
Reading Time: 4 minutes

As is so often the case, the technology industry is cyclical. Looking back at the past two decades of developer conferences, and precisely at what each platform company releases as new features, we can divide the ebbs and flows of platform features into two buckets. The first bucket contains elements that are truly new and enable new use cases and behaviors. The second contains features that build on existing ones and make them better and more useful for users of the platform. In one cycle, a platform has an opportunity to show us the future; in another, it has an opportunity to help us be more productive and efficient.

Siri Shortcuts Highlights Evolution of Voice-Based Interfaces

on June 5, 2018
Reading Time: 3 minutes

To my mind, the most intriguing announcement from this year’s Apple Worldwide Developers Conference (WWDC) was the introduction of Siri Shortcuts. Available across iOS devices with iOS 12 and Apple Watches with watchOS 5, Siri Shortcuts essentially adds a new type of voice-based user interface to Apple devices.

It works by building macro-like shortcuts for basic functions across a wide variety of applications, which you then execute by simply saying the name of your custom-labelled function to Siri. Critically, these shortcuts can be used not just with Apple apps and iPhone or iPad settings, but with applications from other vendors as well.

Early on, most digital assistant platforms, such as Siri, Amazon’s Alexa, and the Google Assistant, focused on big-picture tasks like answering web-based queries, scheduling meetings, and delivering quick data nuggets such as traffic, weather, and sports scores. Most assistant platforms, however, didn’t really make your smart devices seem “smarter” or, for that matter, make them any easier to use.

With the introduction of Samsung’s Bixby, we saw the first real effort to make a device easier to use through a voice-based interaction model. Bixby’s adoption (and impact) has been limited, but arguably that’s primarily because of the execution of the concept, not because of any fundamental flaw in the idea. In fact, the idea behind a voice-based interface is a solid one, and that’s exactly what Apple is trying to do with Siri Shortcuts.

At first glance, it may seem that there’s little difference between a voice-based UI and a traditional assistant, but there really is. First, at a conceptual level, voice-based interfaces are more basic than an assistant. While assistants need to do much of the work on their own, a voice-based UI simply acts as a trigger to start actions or to allow easier discovery and use of features that often get buried under the increasing complexity of today’s software platforms and applications. It’s often observed that most people use less than 10% of the capabilities of their tech products, in large part because they don’t know where to find certain features or how to use them. Voice-based interfaces can solve that problem by allowing people to simply say what they want the device to do and have it respond appropriately.

Given the challenges that many people have had with the accuracy of Siri’s recognition, this simpler approach is actually a good fit for Apple. Essentially, you’ll be able to do a lot of cool “smart” things with a much smaller vocabulary, which improves the likelihood of positive outcomes.

Another potentially interesting development is the possibility of its use with multiple digital assistants for different purposes. While I highly doubt that Apple will walk away from the ongoing digital assistant battle, they might realize that there could be a time and a place for, say, using Cortana to organize work-related activities, using Google Assistant for general data queries, and using Siri for a variety of phone-specific functions—at least in the near term. Of course, a lot of questions would need to be answered and APIs opened up before that could occur, but it’s certainly an intriguing possibility. Don’t forget, as well, that Apple has already created a connection between IBM’s Watson voice assistant and iOS, so the idea isn’t as crazy as it may first sound.

Even within the realm of a voice UI, it makes sense to add some AI-type functions. In fact, Apple’s approach of doing on-device machine learning to help maintain data privacy makes perfect sense: a function that works with the specific apps installed on your device and provides suggestions based on the contacts and other personalized data stored on your phone. This is where the line between assistant and voice UI admittedly starts to blur, but the Apple offering still makes for a more straightforward interaction model that its millions of users will likely find very useful.

As interesting as the IFTTT (If This Then That)-style macro workflows of Siri Shortcuts may be to more advanced users, however, I am a bit concerned that mainstream users could be confused and overwhelmed by the capabilities Shortcuts offers. Yes, you can achieve a lot, but even from the brief demo onstage, it’s clear that you also have to do a lot to make it work well. By the time it’s officially released as part of iOS 12 this fall (as a free upgrade, by the way), I’m hoping Apple will have created a whole series of predefined Siri Shortcuts that regular users can quickly access or easily customize.

The world of voice-based interactions continues to evolve, and I expect to see a number of advancements in full-fledged assistant models, voice-based UIs, and combinations of the two. Long-term, I believe Siri Shortcuts has the opportunity to make the biggest impact of anything announced with iOS 12 on how iOS users interact with and leverage their devices, and I’m really looking forward to seeing how it evolves.

Client Hardware and Business Transformation

on June 4, 2018
Reading Time: 4 minutes

Last month I had the privilege of attending Dell Technology World in Las Vegas, where the overriding theme was business transformation. This term is being used a lot these days to describe the overall shift from a PC-centric IT world to one where the cloud sits at the center of the IT universe and the client can be anything from a PC to a tablet, a smartphone, or even an IoT connection. It also speaks to the integration of essential tools that provide high-level security, collaboration, and the many other elements IT needs to deliver a more seamless way for individuals to work more effectively, and more productively, within their organizations.

There is no question that we are moving to a brave new world where anyone who works within an IT organization, big or small, demands that their client devices be the ones they are most comfortable with, whether they run Windows, macOS, iOS, Android, or Chrome OS.

Over the years I have worked on well over 100 IT integration projects as well as served as the co-chair of the largest CRM conference in the US. While I understand the overall enterprise space, my specific role in these projects was mostly focused on the client area, and I served as the advocate for the actual user.

In the past, I evaluated hundreds of laptops and dozens of smartphones under consideration for these IT programs. In each project, I would put myself in the place of the intended user and, looking at the goal and scope of the project, recommend the type of client best suited to meet the needs of both the user and the IT director. I have helped influence buying decisions for up to 50,000 laptops in multiple IT projects over the years and continue to make these kinds of recommendations on all types of enterprise projects today.

With that in mind, I have been thinking a lot about the workstyles of today’s increasingly mobile workforce and the kinds of tools workers need to be more effective as part of any business transformation. More specifically, I have been looking closely at my own needs in client-based technology to be more effective in my job.

In this process, I have discovered that my workflow is much like that of the average knowledge worker today. Today’s workers are very mobile and use laptops, tablets, and smartphones as part of their daily activities. For all of us, the most important device is the one needed for the specific task at hand. Knowledge workers sometimes work at their desks; other times they are in conference room meetings or take the laptop, tablet, or smartphone with them to lunch or some other off-site venue.

However, it turns out that in most cases the laptop is the real workhorse of the knowledge worker, and in my case, two additional technologies have dramatically impacted my productivity: docking stations, which provide connectivity to various I/O inputs, and large monitors. Most laptop screens are in the 12″ to 15″ range, and when working at a desk for hours at a time, a large monitor has become an even more important tool that enhances my workflow and overall productivity.

Although larger monitors from 19″ to 29″ give users more screen space to work with, I found that a 34″ widescreen monitor is the most useful new tool for enhancing my productivity. In my case, I use the Dell UltraSharp 34-inch curved monitor.

While I have used large monitors connected to my laptop for two decades, I was surprised how much a 34″ curved monitor truly impacted the way I work. Because I have so much more screen real estate, I can work with three different applications on the screen at any given time. In my case, the left third holds my email; the center holds the application I am working in at any given moment; and the right third holds a web browser that gives me constant, immediate access to information I may need when writing, researching, keeping up with news, etc.

I cannot emphasize enough how something as simple as a large widescreen monitor has impacted the way I work and enhanced my overall productivity. I don’t say this lightly: it has changed the way I work and made working at my desk a pleasure.

Ironically, when most prominent tech companies talk about business transformation, and especially the client area, they mostly focus on the roles the laptop, tablet, and smartphone play. But I would argue that wherever a knowledge worker also spends serious time at a desk, these companies need to help their customers understand that docking stations and large monitors can be essential tools in the business transformation process as well.

As an advocate for users in IT projects, I now suggest that adding docking stations and larger monitors to the mix be considered, and I feel they add much for users who spend a lot of time at a desk using their laptops.

The Map Platform and the Map First UI

on June 1, 2018
Reading Time: 4 minutes

I recently had an experience that led me to a few important observations. My family and I went to Disneyland and had our first opportunity to use the park’s newest app. What struck me about Disneyland’s app, and the work they put into making an incredible app that greatly enhances the customer experience at Disneyland, was that the entire app is a map-first UI. Here is a screenshot to take a look at, and then I’ll break it down.