How will Our Screen Addiction Change?

A Nielsen Company audience report published in 2016 revealed that American adults devoted about 10 hours and 39 minutes each day to consuming media during the first quarter of 2016, an increase of exactly one hour over the same period of 2015. Of those 10-plus hours, about four and a half a day were spent watching shows and movies.

During the same year, the Deloitte Global Mobile Consumer Survey showed that 40% of consumers check their phones within five minutes of waking up and another 30% check them within five minutes of going to sleep. On average, we check our phones about 47 times a day; that number grows to 82 for those in the 18-24 age bracket. In aggregate, US consumers check their phones more than 9 billion times per day.

Any way you look at it, we are totally addicted to screens of any form, size, and shape.

While communication remains the primary reason we stare at our screens, there are also tasks such as reading books or documents, playing card or board games, drawing, and writing that we used to perform in an analog way and that are now digital. And, of course, there is content consumption. It all adds up to us spending more time interacting with some form of computing device than with our fellow human beings in real life.

I see three big technology trends in development today that could shape the future of our screen addiction in very different ways: ambient computing, Virtual Reality, and Augmented Reality.

Ambient Computing: the Detox

Ambient computing is the experience born from a series of devices working together to collect inputs and deliver outputs. It is a more invisible form of computer interaction, facilitated by the many sensors that surround us and empowered by a voice-first interface. The first steps of ambient computing can be seen in connected homes, where wearable devices function as authentication tokens to enable experiences such as turning the lights on or off or granting access to buildings or information. The combination of sensors, artificial intelligence, and big data will allow connected, smart machines to be much more proactive in delivering what we need or want. This, in turn, will reduce our need to access a screen to input or visualize information. Screens will become a complement to our computing experience rather than the core of it.

To get a feel for how this might impact average screen time, think about what a device such as a smartwatch does today. While a screen is still involved, it is much smaller, and it shows you the most important and valuable information without drawing you into the device. I often talk about my Apple Watch as the device that helps me manage my addiction. It allows me to stay on top of things without turning each interaction into a 20-minute screen soak. Another example is the interaction you might have with Google Home or Alexa when you inquire about something. Today, for instance, I asked for the definition of “cabana” because my daughter wanted to know. I got what I needed in less than 30 seconds: “a cabin, hut, or shelter, especially one at a beach or swimming pool.” Had she gone on Google Search to find the definition, I guarantee it would have taken a good 10 minutes between reading through the results and looking at pictures, with the effectiveness of the search being no better for involving a screen.

While not a total cure, ambient computing could provide a good detox program that will allow us to let go of some screen time without letting go of our control.

Virtual Reality: the Ultimate Screen Addiction

Virtual Reality sits at the opposite end of the spectrum from Ambient Computing, as it offers the ability to lose yourself in the ultimate screen experience. While we tend not to talk about VR as a screen, the reality is that whatever experience you are having is still delivered through one. A screen that, rather than being on your desk, on your wall, or in your hand, is on your face through a set of glasses of various shapes.

I don’t expect VR to be something we turn to for long periods of time, but if we have ever complained about our kids or spouses having selective hearing while gaming or watching sports, we have another thing coming!

There is talk of VR experiences shared with friends, but if multiplayer games are anything to go by, I expect those “share with friends” moments to be a minor part of the time spent in VR. With VR being so much more immersive, being in an experience with someone you do not know, the way you are in traditional gaming, might be a little more involved and overwhelming. Coordinating with actual friends might be an effort worth making if you are experiencing a music or sports event, but maybe not so much if you are just playing a game.

Escapism, the desire to be cut off from reality, will be the biggest driver of consumer VR.


Augmented Reality: the Key to Rediscovery

Augmented Reality is going to be big, no question about it. Now that Apple is making it available overnight to millions of iPhones and iPads as iOS 11 rolls out, consumers will be exposed to it and engaged with it.

What I find interesting is the opportunity AR has to reconnect us to the world around us. Much of the excitement around Pokémon Go was that people went outside, walked around, exercised. Because humans do not seem to be able to do anything in moderation, that feel-good factor vanished quickly as people paid more attention to the little creatures than to where they were going, culminating in incidents of trespassing, injuries, and fights!

That said, I strongly believe there are many areas where AR can help with our rediscovery of the world around us, from education to travel to nature. Think about Google Translate and how it lowers the barrier to traveling in countries where you do not speak the language.

The trick will be not to position the AR experience as the only experience you want to have. AR should fuel an interest to do more, discover more, experience more.

Of course, developers and ecosystem owners are driven by revenue rather than the greater good of humanity. Yet, I feel that concern over the impact technology is having on our social and emotional skills has lately grown enough to spur real interest in change.

Ultimately, I believe that our addiction to screens is driven by a long list of wrong reasons. Our obsession feeds off our boredom, our feeling that idle time is unproductive time, and the false sense of safety of connecting to others through a device rather than in person. New technologies will offer the opportunity to change if we really want to.

It is not Women in Tech, it is Women in Business We Should Talk About

I had been thinking about writing something on women in tech and what we have been witnessing over the past few months, but I had resisted thus far. Last week, however, when I was a guest on the DownloadFM podcast, I was asked my opinion about the many stories we have read in the press concerning childish CEO behavior and continued allegations of sexual harassment, from Uber to 500 Startups, and I could no longer shy away. After all, I am in tech, I am a woman, and I have an opinion on the topic.

We expect more from Men in Tech

Women face discrimination, chauvinism, and harassment in pretty much any business they are in. For some reason, however, I think the disbelief around some of the stories that have emerged in tech comes from assuming that men in tech would be different, evolved, better. Better than the men who run Wall Street and better than the men on Capitol Hill. That hope is buoyed by the fact that men in tech are by and large well-educated and well-travelled, and they are entrusted with building our future. Men in tech are also by and large white and entitled, and often have poor social skills when it comes to women. Of course, there are exceptions, but they are, alas, exceptions.

You start to believe it is You, not Them

I have been a tech analyst for 17 years, and while I have seen more women in tech, I still get excited when there is a line for the ladies’ bathroom at a tech conference. I still pay attention to how long it takes for a woman to appear on stage at those tech conferences. And while it seems that all the big corporations have increased the number of women on stage, if you pay attention, you notice that most of those women are performing demos and are not upper management.

When I got pregnant with my daughter, female as well as male colleagues told me that my priorities would change and I would not work as hard. I expected it from my male colleagues, but it was disappointing to hear from my fellow women that I was expected to want to do less. The implication, of course, was that if I did not feel that way, I was a bad mother.

On many occasions, I was told I was emotional; I was asked if it was that time of the month; I was told to grow a pair. In meetings, I have been interrupted and talked over by endless male colleagues, mistaken for my colleague’s secretary, and outright ignored after making the mistake of serving coffee to meeting guests. At the start of the smartphone market, I was handed pink phones with a lipstick mirror. I’d love to ask Walt Mossberg if he ever reviewed one of those! On Twitter, for complimenting an actor’s launch of a tech product, I was told I was “throwing my knickers” at him. I have been the token woman on tech panels, and I was invited as a guest on a radio show because “the audience responds better to women talking tech.” And the list goes on.

Things like this happen all the time to many women. They happen so often that you start to think it is the norm, or that you are reading it wrong and taking it personally. Whether you think it is wrong or not becomes irrelevant, though, when you consider how hard you worked to get to where you are and how much further you want to go. So you ignore it, you smile, and you move on. You do what Irish reporter Caitriona Perry did in the Oval Office a few weeks ago.

 

Avoiding Discrimination 3.0

If things have not changed up to now, why is it important that they do? Why does it matter so much that men in tech understand that enough is enough? Because what is going to happen when everybody in the room looks alike and behaves the same way? And of course, this applies to gender as well as race, religion, and politics.

We are at a time when we are training machines to think like us. What a scary thought when it comes to women in business. What will happen when machines weigh physical and psychological traits based on the beliefs that dominate society today? What if men, who claim they did not know it is not normal to make advances in work situations, train computers to think it is normal too? Will women be denied roles a priori based on the belief that “it’s much more likely to be more talking” if too many women are on the board? Are we really building a better society if we move from paying a woman by the hour for sexual favors to buying an AI-enabled doll that will respond to its master just the way a male engineer designed it? What will happen if self-driving cars are taught that a woman is more dispensable than a man in life-and-death situations?

We can rejoice at having female emojis with more professions, and we should. We should continue to foster STEM among female students, but know that just because they can do the job does not mean they will be given the opportunity to do it. Let’s lean on the strong female role models we have. Let’s be supportive. Let’s have each other’s backs. A smart woman said recently that we should not just be happy to be in the room where it happens. We should be sitting at the table, making it happen. So let’s do that: let’s stop thinking it is us, let’s stop thinking it is normal, and let’s get a seat at the table.

Digital Assistants’ Adoption: a Marathon not a Sprint!

What a difference a year makes! That is usually something we can say when looking at technology adoption, either because in a year a technology is history or because it has become a vital part of our lives. Sadly, when it comes to digital assistants and the interactions consumers are having with them, a year has not made much of a difference at all.

In June 2017, we at Creative Strategies surveyed 1,100 US consumers between the ages of 18 and 65 and found that digital assistant penetration has not grown since our previous survey, conducted in May 2016: only 66% of consumers are using one. Among users, 63% use Apple’s Siri regularly, 23% use Google Assistant, and 10% use Amazon’s Alexa. As you would expect, usage is proportional to the tech savviness of the users: among early tech adopters it stands at 96%, and among early mainstream users it reaches 76%.

The Chicken and Egg Problem

The industry is obsessed with determining who is ahead in artificial intelligence and whose assistant is smarter. Consumers, however, do not seem to be asking much of today’s digital assistants.

Alexa reached 15,000 skills just the other day, and Google Assistant and Siri have been growing the range of tasks they can perform. Yet consumers are turning to them for the same things as last year: searching the internet, setting alarms, playing songs, asking for directions, and checking the news. What is encouraging, however, is that while searching the internet is still the primary task, all the others have grown in popularity compared to a year ago, suggesting that consumer confidence might be growing.

When I dissect the data, however, I do wonder if consumers play it safe and ask each assistant for whatever is a core competence of its supplier or fits its primary use case. So Google Assistant is used for search and navigation, Alexa is asked to set alarms and play songs, and Siri is asked for directions, to call someone, or to set a timer.

Since we are still at the early stages of these relationships, looking for something we know will end in a positive exchange is natural. It is also good confidence building to ask for tasks we are sure our assistant will get right. It does, however, raise the question of how this will impact the discovery and adoption of new features and skills in the long run, and consequently how it will affect the return brands see on the investment they are making in digital assistants.

The Value Proposition Problem

Consumers who are not using a digital assistant say either that they are more comfortable typing (33%, a rejection of voice-first rather than of the assistant) or that they tried a few times, had to repeat themselves (20%), and did not use it again. Force of habit is also hurting digital assistants, as 19% of consumers forget they can use voice even when they know they can. Interestingly, some consumers also think we are getting too lazy, while others say they cannot get themselves to actually learn how to use a digital assistant.

Consumers who do use digital assistants regularly seem to be pretty satisfied with their performance, with Google Assistant showing the highest percentage of very satisfied users.

Siri users trail both Google Assistant and Amazon Alexa when it comes to very satisfied users, which to me speaks to why Apple’s decision to make HomePod about music and sound first was the right one.

I have discussed before how I expect Apple to pivot with Siri when HomePod gets to market. We can debate whether or not Siri is lagging in capability compared to Google Assistant and Alexa. What matters is that non-users might think so and current users are not as satisfied as they could be. Apple coming out at WWDC and promising a better Siri for HomePod would not have helped the situation.

28% of consumers we surveyed are interested in a HomePod, and another 16% said they are planning to buy one. These numbers go up considerably among early adopters, where 60% say they are very interested and 42% are planning to buy one.

46% of current Siri users are interested in HomePod, and another 29% are planning to buy one. Even among consumers in our survey who were unsatisfied with Siri, interest in HomePod is as high as 43%, with another 30% planning to buy. What grows in this segment is consumers’ need to be convinced that Siri will be better. Convinced, not promised! 34% said they need to be convinced that Siri has improved before they will consider buying a HomePod. This number declines to 17% among Siri users overall.

Price is the biggest deterrent, with 17% of consumers saying HomePod is too expensive. Consumers are not looking to the competition instead: only 4% would buy an Amazon Echo or a Google Home. Owning a competing device is also not stopping consumers from being interested in HomePod: only 4% and 3% said their lack of interest is rooted in their contentment with Amazon Echo and Google Home, respectively.

Apple will still have to deliver on Siri, not for HomePod’s sake but for Apple’s. Promising and delivering on what consumers can understand and easily assess, however, is the same smart move Apple made with Apple Watch and fitness. The value the rest of the device brings to users will be personal and will grow over time. This strategy is paying off with Watch, and it will do the same with HomePod.

iPad Pro: You do You!

If you insist on looking at iOS 11 + iPad Pro = PC, you might miss the opportunity for this combo to live up to its full potential. I know that, for many, PCs and Macs are synonymous with work and productivity, so my suggestion to start looking at the iPad Pro differently is lost on them. Yet, I promise you, there is a difference between wanting to replicate what you have been doing on a PC and wanting to understand whether the iPad Pro can fit your workflow, or even help your workflow change to better fit your needs.

I have been using a 9.7” iPad Pro as my main “out of office” device since its launch. I do everything I do on my Mac or PC; some things are easier, some are a little more painful, but by and large it serves me well.

I upgraded to a 10.5” iPad Pro a couple of weeks ago, and it has been business as usual. I enjoy the extra screen real estate, though I struggled a little to adjust to the larger keyboard, as the muscle memory in my fingers was generating typos. I did not use the Pencil more, despite the fact that, thanks to the new sleeve, I was not forgetting it at home as often as I used to.

After 24 hours with iOS 11 on the iPad Pro, it became apparent to me that the range of things I could do had grown, and so had the depth I could reach. These are not necessarily tasks I was performing on my Mac or PC, and when they are, they are implemented in a slightly different way, as the premise on iPad is touch first.

Let’s take a Step Back

Before I moved to the iPad Pro, I had to embrace the cloud. This step was crucial in empowering me to use the best device for the job at any given time. When I travel, mobility trumps everything else. Going through the little pain that a smaller screen and keyboard imply is well worth the advantages of cellular connectivity, instant on, all-day battery, and the ability to dump it all in one purse.

What does a normal day at the office entail for me? Usually most of the following: reading articles, reports, papers, and books; writing; social media interactions; listening to and recording podcasts; email; messaging; data analysis; and creating or reviewing presentations.

I could perform all of those tasks on an iPad Pro as well as on a MacBook or a PC. What differed is which task was best executed on each device. Anything touch-first was better on my iPad Pro or my Surface Pro, as was anything that supported pencil or inking. The MacBook Pro and Surface were slightly better with Office apps, but mainly because of the larger screen and the better keyboard. The iPad Pro still offered a better balance of work and play, thanks to the larger ecosystem and better apps, and partly because Surface is held back by Windows 10 conventions that make it walk and talk too much like a PC.

iOS 11 Brings Richness to the iPad Pro

This is not a review of iOS 11, so I will not list everything that is new in it, but I will point to the features and capabilities that struck me as changing the way I work.

Files
Adds freedom to my cloud-first workflow, allowing me to live in a multi-cloud environment, which was possible before but not without pain.

The New Dock, Slide Over, and Split View
These make for a faster, richer multitasking environment that you appreciate when you must always keep an eye on social media, or when you are creating charts or sifting through data while writing a report.

Drag and Drop
This is possibly the best example of a feature that, despite sharing its name with the Mac, is made a zillion times better by touch. It turns something that is cumbersome to do with a mouse into something super intuitive.

Instant Markup, Instant Notes, Scan and Sign
Despite still preferring the Surface Pen, I finally see myself being able to integrate the Pencil into my workflow. I read many reports, and I used to print them out, annotate them, highlight them, and then take pictures of them so that I would not file them somewhere safe and never see them again. All those steps are now condensed into a much more efficient and equally productive experience. In this case, it is not about being able to do something I was doing on my Mac. It is, instead, the ability to fully digitize a workflow. It also allows Apple to catch up with inking on Surface, and I specify Surface because I have not found another Windows 10 2-in-1 that offers the same richness of experience.

QuickType Keyboard
I am still not a fan of the physical iPad Pro keyboard. I do not like the texture of the material, and the lack of a backlight limits its usefulness on planes and in bed, sadly two places where I often work! Because of that, my default has always been the on-screen keyboard, and the new update makes it a breeze to touch type on the iPad Pro, both for speed and accuracy.

Screen Capture and Screen Record
We have already started to experiment with the screen record function to share an interesting chart with some live commentary. This is something we could have done in the past, but it required specialized apps and a pretty convoluted process. Screen Capture coupled with Instant Markup also offers a new way to interact with content, one that Samsung Galaxy Note users are already addicted to.

These are all new features that will make my work on iPad not just more efficient but more pleasant, because it better fits me. I am sure this will result in more time spent on iPad when I am in the office, not just when I am on the go.

The Best Tool for the Job

If you think about other technology innovations, there is always a degree of compromise, at least for a given period. Think about the clarity of a voice call on a fixed phone vs. a DECT phone and then a cellular one. We were happy to sacrifice sound quality, first for the freedom to walk around the home and later for always having a phone with us. The same can be said about fixed broadband vs. mobile broadband.

I feel, though, that with computing we are getting to the point of not having to compromise, as long as we do not let habits hold us back and we feel empowered to reinvent our workflows. Millennials and Gen Z have the advantage of not suffering from the limitations muscle memory imposes, but there is hope for us old dogs too.

Worrying about the installed base has destroyed companies such as Nokia and Blackberry and has held back Microsoft. Apple has had the advantage of a very loyal and forgiving installed base of users who gave it the benefit of the doubt when it started to experiment with computing. Microsoft has stepped up, and with Surface it has been able to deliver the richer experience that comes from a deeper integration of software, hardware, and apps. Yet, while the goals of these two giants seem very much aligned, I cannot help but wonder if Microsoft’s decision to go all in with Windows 10 will always hold it back somewhat vs. an Apple that has chosen a two-pronged approach to computing.

For me, Surface Pro is the best productivity device on the market today, but it is held back from being a true creativity device by Windows 10. As a user who wants to be both creative and productive, the iPad Pro is the choice for me today. I am, however, keeping my eyes on Windows 10 S as Microsoft’s opportunity to create a two-pronged strategy that frees it of the legacy ball and chain.

Retailers Play a Key Role in the Success of Smart Homes

Connected home products are grabbing floor space and early tech adopters’ attention. Sales are growing, and big brands are investing more and more. But moving from early tech adopters to the mainstream will not just be about lower prices. A better shopping experience is a must while consumers are still confused about what works with what and about the overall benefits of a connected home.

Tech-savvy consumers know what they want. They have researched the product category, read the tech reviews, and asked friends, and they are happy to purchase online. Tech-savvy buyers are also glad to go through whatever pain the setup of a device might bring. They see the pain as part of the process of being early tech users; it is their duty to pave the way for the masses.

Mainstream consumers, on the other hand, want a pain-free setup and, most of all, a worry-free purchase experience. In our research into early connected home adoption, mainstream consumers expressed the need to have someone to go to in a store, and the peace of mind that, if something went wrong, they could bring the device back and talk to a human. In our focus groups, consumers seemed to prefer home-improvement stores to electronics stores, mainly because that is how they see these connected devices. A connected bulb is still a bulb!

Sadly, though, if you go to a Home Depot or a Lowe’s, you are left facing a bunch of connected products lumped together on a shelf, with very little information on what they do, let alone the experience they can deliver.

It’s about the Experience, not the Specs

Whenever I play mystery shopper, I am faced with a high degree of ignorance on the topic of smart accessories. Most sales assistants know the specs and what is spelled out on the box, but unless you find someone who went through their own setup at home, it is rare to hear about an experience. Yet I find that when you can envision what a particular device can do for you, the sale is much easier.

Last week I moderated a panel on ambient computing at the Target Open House in San Francisco, and I was pleased to see how it has evolved since I first visited after its grand opening over a year ago. The space gives potential buyers the ability to see products in a large room called the Playground, as well as to walk through a living room, a bedroom, and a garage to experience some of them in a home context. Target has 500 stores across the country that display smart products in context.

While, as you might expect, the space is still quite showroom-like, it does attempt to deliver an experience. What I liked is how Target focuses on guests’ personalities and preferences rather than on the products. So, for instance, if you are a sports enthusiast or a music lover, they show how your living room can be optimized for the ultimate viewing or listening experience. I think this is interesting because it attempts to put the consumer first rather than the product. In other words, it is about helping you find the products that deliver what you want, instead of telling you about products and letting you discover how they fit into your life once you get them home.

A few months ago, I spent some time in a model home outfitted with HomeKit-compatible products. Needless to say, the experience was pretty compelling, as there is nothing more convincing than sitting on what could be your own sofa while opening the door for a guest, lowering the blinds to get the perfect light for watching TV, and setting the room temperature. Over time, this will become the norm for buyers of new homes; I expect you will be able to pick a Siri, Google Assistant, Cortana, or Alexa home. For now, however, not every vendor in the market can have a real-life home to welcome potential buyers, so store experiences are important. Your average consumer is also not necessarily going to attend a home show, where many of these solutions have been showcased thus far.

Interestingly, setting up experience rooms is how large TVs and projectors are displayed in electronics stores. If you walk into a Best Buy, you will quickly find the room with the cinema chairs and the projector, or the large-screen TV that disappears behind a portrait above the fireplace, or the patio speakers disguised as rocks. Showcasing video and audio solutions in a real-life setting has been done for years, yet showcasing a connected home is not something retailers are rushing into, and I think it is because the opportunity is more limited for them, at least for now. It might all boil down to how many connected devices a retailer needs to sell to equal the sale of a $7,000 video projector.

Smart Experience Showcases Can Help Retailers and Brands

In this early stage of the connected home, it is not just consumers who need help buying; brands need help selling. Information on what message resonates with consumers, what features close the deal, what the job to be done is… Retailers can provide that information when they set up a smart environment. Target Open House, for instance, has sensors that collect information on foot traffic, product views and likes, and touches on digital screens. Information about sales and direct feedback is shared with the team of experts who work in the house and is used to decide which products should be displayed in the Playground area, as well as which might make sense to sell at Target stores nationwide. Some of the insights are also shared with the companies on the shelf to help them understand how guests are experiencing their products.

Big data is such a trend in tech right now that retailers should start talking more about what kind of data they are prepared to share with brands. This can be a competitive advantage in securing product exclusives and co-marketing spend.

A Platform for Smaller Brands

The connected home space is benefitting from Amazon, Google, and Microsoft opening up Alexa, Google Assistant, and Cortana, respectively, to be integrated in different ways into apps and devices. While apps have an easy route to market through app stores, most device manufacturers still need a distribution channel, whether online or in store. Kickstarter and Indiegogo can help startups get to market, but once there, getting noticed might be harder than they thought.

Target Open House offers startups a stage through its Garage space, where a dozen products at a time are showcased before they get to market. Products that guests are particularly excited about and that offer a somewhat unique proposition are then moved to the Playground area and onto Target’s shelves.

Other stores should follow in Target’s footsteps and offer a stage for startups, especially local ones. A community feel always speaks to consumers; look at how popular farmers markets and farm-to-table restaurants are!

A Connected Home is not built on One Device Alone

Connected homes, in the true sense of home automation, are a complicated concept that will take years to develop fully. They will also be quite different from one home to another. Some consumers might like a single-brand home; others will want to pick best-of-breed brands for the many areas they decide to connect. Experiencing that home will matter to everyone, but especially to those who will pick and mix. This is why experiencing, as best one can, how technology changes your home is important. While consumers today think about it in terms of home improvement, I believe home design will also play a key role in shaping the connected home. Maybe, over time, Pottery Barn rather than Home Depot is where consumers will turn.

Five Internet Companies That Need Better Consumer-Facing Customer Service

A year ago, when Google announced an aggressive push into the consumer hardware business, I wrote that the company needs a better consumer-facing customer service infrastructure. The column was published in Recode and received quite a bit of attention.

I’ve been thinking about some other consumer-oriented Internet companies and brands that also need to improve their customer service. My bias is toward actually being able to talk to a human being, in real time, by phone or via live chat, because sometimes, in certain situations, the miasma of e-mail, help forums, Zendesk, and the like just doesn’t cut it. A common approach among Internet companies is to shift the burden of customer support to the customers themselves, which means that Mary from Kentucky might be telling you how to connect your bank to Mint.

Companies that do this well — Amazon, Apple, Netflix, and even some of the cellular operators such as T-Mobile — have higher levels of customer satisfaction and loyalty. Some have shown marked improvement (Dell, Microsoft), while others, such as some of the airlines, have started to use Twitter fairly effectively, especially during times of high call volume.

So, here are five companies in the B2C realm that need to make improvements in their customer service infrastructure.

Mint. If you read the help forums, the tens of millions of people who use this web-based personal finance management service have a love-hate relationship with the company. Mint has email and the Zendeskian support site, but there is no way to actually talk to a human being at Mint. The types of problems and questions that can come up – bank can’t connect, wacky duplicate entries, transactions that suddenly get lost – require an immediate, real-time discussion, not a multi-threaded email exchange that can stretch out over several days. Curiously, Mint is owned by Intuit (Quicken, TurboTax), so there’s no shortage of customer support infrastructure there. Perhaps they can dispatch some of the army of folks who staff the support lines at TurboTax during the “off-season”!

Uber and Lyft. If you ever have a problem with one of these popular ride-sharing services, you might be wistful for that cranky local taxi dispatcher you used to call when the cab didn’t show up. Because unless it’s a real emergency, there is basically no way to contact a human customer support person at Uber or Lyft. If one uses these services with some frequency, there will inevitably, at some point, be an issue with an incorrect fare, being charged for a canceled trip, etc. If there’s ever an actual dispute, web/email is the only recourse – some nameless person (maybe even a robot?) is judge and jury, and there’s little opportunity for any back and forth. There are some situations where one needs to be able to talk to a person to provide some background and context. Uber and Lyft should do better here.

Airbnb. Sensing a theme? The vaunted ‘sharing economy’ operates lean and mean when it comes to customer support. Now, it is possible to contact Airbnb when there’s an emergency. But if there are any other issues or questions, as a guest or a host, there are lots of hoops to jump through in order to talk to a person. Airbnb does have a number to call, but it is hard to find on their website. My personal experience has been that hold times can be very long, with customer support generally outside the U.S. and reps not adequately trained or equipped to deal with contextual situations. This isn’t like calling your cable company to do a modem reset; each situation is unique.

Airbnb handles some 500,000 stays daily…situations are bound to come up. Even though @AirbnbHelp can be very effective, when one is in a foreign place, it would be good to know that there’s an ability to call a person at Airbnb to get help in real time.

Another frustration is that Airbnb does not provide the ability for a guest to contact a host until a reservation is actually booked, other than through its internal messaging system. Again, there are situations and contexts during the ‘reservation inquiry phase’ where electronic, asynchronous communication just doesn’t cut it. Airbnb has said it withholds contact information due to privacy concerns, but I’d imagine that another reason is Airbnb doesn’t want the guest and host to ‘go around’ its system in order to avoid fees. If a host is willing to provide their phone number to a potential guest, shouldn’t they be able to?

LinkedIn. This is a bit more of a B2B site, but I think the issue of customer support still applies. LinkedIn does not offer any phone-based support, and chat support is uneven and unpredictable. E-mail support is through the dreaded “web form, with drop-down options”, which, again, puts the onus on the customer and lacks the ability to provide context. Now, the issues might not be of the ‘urgent’ B2C variety as with Uber or Airbnb, but LinkedIn is a large and fairly complex site, and getting any help figuring out how to best use it or answering FAQs can be an unwieldy and time-consuming process.

Facebook. Whether it’s help using the site, posting an ad, or dealing with a more urgent issue such as customer privacy or an emergency-type situation, it is difficult, if not impossible, to talk to a human being at Facebook. The company has a very extensive Help Center, with literally hundreds of forms, and a very active Facebook community. And I understand that with some 2 billion users across many types of services, high-touch customer support might be a huge challenge to undertake. But there are a few types of situations, specifically with regard to privacy or other types of emergencies, where it would be good to know that one can get help from someone at Facebook, and quickly. I did a little research and found situations where, for example, a Facebook user was reporting unauthorized use of photos of their child, and they were told to ‘fill out a form’ by someone on the ‘Facebook Help Team’. Not very reassuring.

Now, folks might complain about the high cost of cellular or cable service, but at least you can call them for tech support…or argue about a bill!

A Demo is not a Product

Many of us immersed in the world of consumer tech become quite excited when we see something new for the first time. Our imagination immediately races ahead to try to understand how we’ll use it and what products we’ll buy.

But our imagination is rarely tempered by the actual time it can take to turn a new technology into a product. We get ahead of ourselves with predictions about the impact the technology will have and how it will change our lives. In all my experience, it always takes much longer than expected.

Our excitement often leads to unreasonable expectations, impatience, and disappointment once the product finally arrives. The product is often less than we expected, and it may take several iterations before it meets those expectations.

The time it takes for a new technology concept to become mainstream is measured in years or decades, rarely in months. Many things need to occur. There’s the time needed for development of the product, the time it takes to create awareness in the market, and the time for people to realize they have a need. Even then, a buying decision can take years more.

The world is not composed of people like us who are early adopters and can’t wait for the next new thing. Most can wait and usually do. Sometimes years. There are many reasons for this, from not understanding the new technology to being cautious and skeptical about its value, being intimidated, not being able to afford it, or just not caring.

The table below shows just how long it took with other products.  The CD player and VCR, as examples, took ten years to reach a fifty percent penetration of US homes.

We can argue that with social media, the speed of information, and a technically more proficient population, adoption might move faster today. But our expectations are now higher, we’re more skeptical, and it often takes more to impress.

That hasn’t stopped companies’ efforts to get us excited about their new tech. We’re being inundated with news every day: self-driving cars (even some that fly), artificial intelligence, and virtual and augmented reality.

Much of the news is promoted by the companies themselves to raise investments, increase their valuation, or scare away their competitors, all while exaggerating the time to commercialization.

Just last week Uber announced an investment in a company developing flying cars. It played well on the national news, which quoted a company official saying they would roll out a network of flying cars in Dallas by 2020. Last year Uber said it was already deploying self-driving cars when, in fact, it still has one or two employees in each car. Two years ago, Amazon demonstrated drone delivery. Yet these technologies are still years away.

Today it’s hard to open Facebook or a technology blog without seeing examples of virtual and augmented reality. We’re seeing demos from scores of companies around the world, each vying for moments of fame. We see all sorts of clever uses of how this technology will help us in education, medicine, shopping, and computing as if it’s just around the corner. Yet much of this will evolve slowly and take years to be significant.

If the past is any indication, the first-generation products will not be commercially successful, but more of a proof of concept. No one will wear huge goggles outside of their home. Enabling technologies still need to be developed, including smaller components, miniaturized optics, and faster processors to enable these devices to be practical. More importantly, new tools and an infrastructure are needed for creating affordable content.

Yes, we’ll see some small examples when we point our phone at a restaurant or a product and see reviews and can buy with a click. Tim Bajarin correctly pointed out in this piece that he doesn’t expect to see VR adopted widely for at least 5-7 years.

The point here is not to be discouraging about innovation, but to realize that it’s a long and difficult road from a prototype or demo to a successful product. The idea is always the easy part.

Apple watchOS 4 brings Intelligence to the Wrist

There was a lot unveiled during the Apple WWDC keynote last week and, as was to be expected, some of the hotter and bigger products stole the limelight and relegated others to being simply extras in the over-two-hour-long production. watchOS 4 might not have seemed significant, especially to those who have been so eagerly calling Apple Watch a failure, but I saw it as one of the best examples of how Apple sees the future.

The wearable market remains a challenging one for most vendors. According to IDC, in the first quarter of 2017 Apple and Xiaomi shared the number one position with volumes of 3.6 million units each. While volumes are the same, it is when you look at the average selling price (ASP) of these two brands that the real issue with the wearables market surfaces. Apple controls the high end of the market and Xiaomi the lower end. In between, Fitbit is losing ground and failing to move ASP up.

Delivering clear value continues to be key in convincing consumers that wearables have a role to play, and for now that value for mainstream consumers remains health and fitness.

There is More Value in a Coach than a Tracker

Since Apple Watch Series 2, Apple has been focusing on fitness, and watchOS 4 builds on that by adding Workout app support for the highly popular High-Intensity Interval Training, auto-sets for pool swim workouts, and the ability to switch between and combine multiple workout types.

Apple is also attempting to turn Apple Watch into more of an active coach than a simple tracker. This might seem like a subtle distinction, but if implemented right it could actually drive engagement and loyalty. Tracking, while clearly useful, plays a more passive role, and one that some users might think could be taken on by other devices. Turning Apple Watch into more of a coach through daily inspiration, evening pushes, and monthly challenges deepens the relationship a user has with the device. Delivering suggestions on how to close the rings, praising the goals achieved thus far, and pushing to achieve more can make users feel that Apple Watch is an active driver of their success, which in turn increases the value they see in it.

The new GymKit, which allows gym equipment to sync with Apple Watch, might take a while to materialize given the hardware updates required from key brands such as Life Fitness, Technogym, and StairMaster, but it ensures Apple is not losing sight of critical data. Today, some users might rely on the gym equipment rather than their Apple Watch because of the duplication of functionality, which leaves Apple Watch missing out on valuable data to which Apple and other apps could otherwise have access. GymKit puts Apple Watch right at the center of our fitness regimen. Apple Watch talking to gym equipment via NFC also makes me believe that more devices will come in the future. Think about having your gym membership card or your hotel room key on your watch rather than having to carry a physical card.

Reinforcing the Strong Pairing of Apple Watch + AirPods

I have talked before about the magic that Apple Watch + AirPods can deliver to users, and I remain a believer. In a similar way to HomePod, music on Apple Watch is the easiest way to appreciate Siri as well as the combo with AirPods. With watchOS 4, Apple is making it simpler to get to the music you want for your workout thanks to new multi-playlist support and automatic imports.

Apple also introduced the new Siri watch face, which makes Apple Watch much more context-aware by delivering information that is relevant to you at a specific moment in time. While Apple did not talk about it, one could see how the Siri watch face could integrate very well with voice when you are wearing AirPods. Siri could, for instance, tell you that you need to leave for your meeting while showing you the calendar appointment on Apple Watch.

So, as Apple Watch becomes more like a coach, Siri becomes a more visible yet discreet assistant that is being liberated from the iPhone. I think this is a very powerful paradigm, and before naysayers jump to point out that Apple Watch penetration is limited, I would underline that Apple Watch users are highly engaged in the Apple ecosystem and represent Siri’s best opportunity. Similar to CarPlay, Apple Watch also has a captive audience, not just for Siri’s brains but also for voice-first interaction. With Apple Watch, voice is the most natural form of interaction, especially when wearing AirPods. So much so that, with watchOS 4, SiriKit adds support for note-taking apps, so you can now use Siri on Apple Watch to make changes in any note-taking app.

Smarter Watch, Smarter Apps

Some Apple Watch critics have pointed to last month’s news that Google, Amazon, and eBay were killing support for their Apple Watch apps as evidence that Apple Watch has failed. The reality, however, as I have explained numerous times, is that Apple Watch cannot be seen as an iPhone on your wrist, and therefore its success will not be driven or defined by the same enablers.

Don’t get me wrong, there is certainly a place for apps to play, but context is going to be much more important than it has been so far on the iPhone or the iPad. This is why I believe Apple’s latest watchOS will help in making apps not just faster and smoother to run but much more relevant for users.

First, there will be a single process that runs an app’s UI elements and code. This helps with speed and responsiveness and means developers do not need to change their code. Access to Core Bluetooth will allow apps to bypass the iPhone and connect directly to Apple Watch, so that data is transmitted faster between Apple Watch and an accessory, for instance. Apple also increased the number of app categories that can run in background mode, such as navigation apps.

While it will be up to developers to think differently when it comes to delivering apps for Apple Watch, I believe Apple has given them a much easier tool set to succeed.

Apple Watch and its Role in Ambient Computing

HomePod was the sexy, hot product that everybody paid attention to, and ambient computing is the buzzword of choice at the moment. Both are extremely relevant to how one should think about home computing, and even office computing, to be honest. It is easy for me to see the role that Apple Watch can play in helping me navigate my ambient computing network in a personal and highly relevant way. It is early days, but Apple has laid the foundation!

AMD and Intel Race Towards High Core Count CPU Future

As we prepare for a surprisingly robust summer season of new hardware releases for consumers, both Intel and AMD have moved in a direction that seems both inevitable and wildly premature. The announcement and pending introduction of high core count processors, those with many cores sharing each company’s most modern architecture and design, brings with it an interesting combination of opportunity and discussion. First, is there a legitimate need for this type of computing horsepower in this form factor? And second, is this something that consumers will want to purchase?

To be clear, massive core count CPUs have existed for some time, but only in the server and enterprise markets. Intel’s Xeon line of products has breached the 20-core count in previous generations, and if you want to dive into Xeon Phi, a chip that uses older, smaller cores, you will find options with over 70 cores. Important for applications that require a significant amount of multi-threading or virtualization, these chips were expensive. Very expensive, with prices crossing the $9,000 mark.

What Intel and AMD have begun is a move to bring these high core count products to consumers at more reasonable price points. AMD announced Threadripper as part of its Ryzen brand at its financial analyst day, with core counts as high as 16 and thread counts of 32 thanks to SMT. Then at Computex in Taipei, Intel one-upped AMD with its intent to bring an 18-core/36-thread Skylake-X CPU to the new Core i9 lineup. Both are drastic increases over the current consumer landscape, which previously capped out at 10 cores for Intel and 8 for AMD.

Let’s first address the need for such a product in the world of computing today. There are many workloads that benefit readily from multi-threading, and consumers and prosumers who focus on areas such as video production, 3D rendering/modeling, and virtualization will find that single-socket designs with 16 or 18 cores improve performance and scalability without forcing a move to a rackmount server infrastructure. Video encoding and transcoding have long been the flagship workloads for demonstrating the power of many-core processors. AMD used them, along with 3D rendering workloads in applications like Blender, to demonstrate the advantages of its 8-core Ryzen 7 processors in the build-up to their release.
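To make the scaling argument concrete, here is a minimal Python sketch of how an embarrassingly parallel transcoding job spreads across however many cores a CPU exposes. The `transcode_frame` function is a toy arithmetic loop standing in for real per-frame encoding work, not any vendor's benchmark:

```python
import multiprocessing as mp
import time

def transcode_frame(frame_id):
    # Stand-in for CPU-bound work such as encoding one video frame.
    total = 0
    for i in range(200_000):
        total += (frame_id * i) % 7
    return total

if __name__ == "__main__":
    frames = list(range(64))

    start = time.perf_counter()
    serial = [transcode_frame(f) for f in frames]
    t_serial = time.perf_counter() - start

    # Pool() defaults to one worker per available core, so a 16- or
    # 18-core part gets used automatically with no code changes.
    start = time.perf_counter()
    with mp.Pool() as pool:
        parallel = pool.map(transcode_frame, frames)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")
```

On a workload like this, speedup tracks core count almost linearly, which is exactly why renderers and encoders are the flagship demos for many-core parts; a single-threaded loop, by contrast, sees no benefit and may even regress if the many-core part clocks lower.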

Other workloads, such as general productivity applications, audio development, and even PC gaming, are impacted less by the massive increases in core count. In fact, any application that is heavily dependent on single-threaded performance may see a decrease in overall performance on these processors, as Intel and AMD adjust clock speeds down to fit these new parts into some semblance of a reasonable TDP.

The truth is that hardware and software are constantly in a circular pattern of development: one cannot be fully utilized without the other. For many years, consumer processors were stuck mostly in a quad-core rut after an accelerated move there from the single-core architecture days. The lack of higher core count processors has let software developers get lazy with code and design, leaving the operating system to handle the majority of threading operations. Once many-core designs are the norm, we should see software evolve to take advantage of them, much as we do in the graphics market, where higher-performance GPUs push designers forward. This will lead to better utilization of the hardware being released this year and pave the road for better optimization across all application types and workloads.

From a production standpoint, Intel has the upper hand, should it choose to utilize it. With a library of Xeon parts built for enterprise markets already slated for release this year and in 2018, the company could easily bring those parts to consumers as part of the X299 platform rollout. Pre-built, pre-designed, and pre-validated, the Xeon family was already being cannibalized for high-end consumer processors in previous generations, but Intel capped the migration in order to preserve the higher prices and margins of the Xeon portfolio. Even at $1,700 for the 10-core 6950X processor, Intel was discounting dramatically compared to the Xeon counterpart.

Similarly, AMD is utilizing its EPYC server product line for the Threadripper processors targeting the high-end consumer market. But AMD doesn’t have a large market share of workstation or server customers to be concerned about cannibalizing. To AMD, a sale is a sale, and any Ryzen, Threadripper, or EPYC sold is an improvement to the company’s bottom line. It would surprise no one if AMD again took an aggressive stance on pricing its many-core consumer processors, allowing the workstation and consumer markets to blend at the top. Gaining market share has taken precedence over margins for AMD; it started as the initiative behind the Polaris GPU architecture, and I think it continues with Threadripper.

These platforms will need to prove their value in the face of dramatic platform requirements. Both processor vendors are going to ship their top-performing parts with a 165-watt TDP, nearly double that of the Ryzen and Kaby Lake desktop designs in the mainstream market. This adds complexity for cooling and power delivery on the motherboard. Intel has muddied the waters on its offering by varying the number of PCI Express lanes available and offering a particular set of processors with just four cores, half the memory channels, and 16 lanes of PCIe, forcing platforms into convoluted solutions. AMD announced last week that all Threadripper processors would have the same 64 lanes of PCIe and quad-channel memory support, simplifying the infrastructure.

With that knowledge in place, is higher core count processing something consumers have been asking for, or is it a solution without a problem? The truth is that desktop computers (and notebooks by association) have been stuck at 4 cores in the mainstream market for several years, and some would argue artificially so. Intel, without provocation from competent competing hardware from AMD, has seen little reason to lower margins in exchange for added performance and capability in its Core line. Even the HEDT market, commonly referred to as the E-series (Broadwell-E, Ivy Bridge-E, and now Skylake-X), was stagnant at 8 cores for longer than was likely necessary. The 10-core option Intel released last year seemed like an empty response, criticized as much for its price ($1,700) as it was praised for its multi-threaded performance.

AMD saw the opportunity and released Ryzen 7 to the market this year, at mainstream prices, with double the core count of Intel Core parts in the sub-$400 category. The result has been a waterfall effect that leads to where we are today.

Yes, consumers have been asking for higher core count processors at lower prices than are currently available. Now it seems they will have them, from both Intel and AMD. But pricing and performance will have the final say on which product line garners the most attention.

Apple HomePod: A Speaker with the Bonus of Siri

On Monday, the most awaited and rumored device of Apple’s developer conference was finally announced as the “one last thing” of an over-two-hour-long keynote: HomePod.

A little later in the day, in a room probably as large as my family room at home, I had the opportunity to listen to HomePod and compare its performance to an Amazon Echo and a Sonos Play:3. I listened to five songs across the three devices: Sia’s “The Greatest,” “Sunrise” by Norah Jones, “Superstition” by Stevie Wonder, “DNA” by Kendrick Lamar, and a live performance of The Eagles’ “Hotel California.” The sound coming from HomePod was crisper and the vocals clearer than the Sonos; the comparison with the Echo was harsher still. No matter where I stood in the room, the music sounded great. What I did not get to do was talk to Siri! Even the demo was run from an iPad, which would imply HomePod supports Bluetooth.

The Advantage of Going Music First

On stage, Phil Schiller said HomePod will do for home music what iPod did for music overall. The iPod, of course, did a lot for music from a business-model perspective, but I do not think this is what Schiller was getting at. I believe the ‘reinventing home music’ comment is actually closer to what AirPods have done for wireless headsets: they created a more magical experience, from pairing with your phone all the way to listening to music. HomePod delivers good-quality sound without the added complexity of having to figure out where to position multiple speakers in a room to achieve that sound. The fact that HomePod understands where it is positioned in the room, and whether or not it is paired with another HomePod, and dynamically changes the way music is played, takes all the burden away from the user.

By focusing on music first, Apple straight away opens up the addressable market to a much broader segment than a smart speaker would. There are more people out there interested in buying or replacing speakers who care about good sound quality than there are people wanting a smart speaker that delivers merely OK sound.

While early tech adopters might find it easy to invest in a speaker to get access to an assistant, the price they are willing to pay for it has been set by Amazon and Google, and so far it has not gone past $249. Beyond early adopters, justifying the investment gets a little trickier if the core value is placed on the assistant. Nobody would question quality sound, however. And even if the assistant turns out not to be that important to you, you would not regret the purchase. That is a smart move when you consider that Apple knows not only music but also hardware.

Siri as a Specialist to Build Trust

During the keynote, Apple was much more intentional in how it described Siri beyond voice. As the different presenters talked about machine learning and artificial intelligence, Siri clearly emerged as a brain, not just a voice.

When it comes to HomePod, Siri becomes a musicologist able to understand my music taste and preferences and deliver the perfect playlist when I ask, “Siri, play some music I like.” Determining what music to play based on taste, and possibly mood and time of day, does not seem particularly difficult, which would give Siri a high chance of getting it right. This accuracy is going to build confidence in users, who will likely increase usage and trust Siri with other things over time.
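As a toy illustration of why this kind of recommendation is tractable, here is a short Python sketch that scores candidate tracks by how often the listener liked that genre at a similar time of day. The track names, history, and scoring weights are all hypothetical, and this is in no way Apple's actual approach:

```python
# Hypothetical listening history: (track, genre, hour_played, liked)
HISTORY = [
    ("Sunrise", "jazz", 8, True),
    ("DNA", "hip-hop", 18, True),
    ("Hotel California", "rock", 21, False),
]

# Hypothetical candidate tracks: (track, genre)
CATALOG = [
    ("Come Away With Me", "jazz"),
    ("HUMBLE.", "hip-hop"),
    ("Take It Easy", "rock"),
]

def score(genre, hour):
    """Weight liked plays of this genre by closeness in time of day."""
    total = 0.0
    for _, played_genre, played_hour, liked in HISTORY:
        if played_genre == genre and liked:
            total += 1.0 / (1 + abs(hour - played_hour))
    return total

def pick_track(hour):
    """Return the catalog track whose genre scores highest right now."""
    return max(CATALOG, key=lambda track: score(track[1], hour))

print(pick_track(9))   # at 9 a.m. the jazz track wins, given the history above
```

The point is not this particular weighting but that taste plus time of day gives a strong prior, so even a simple model is right often enough to build the kind of trust described above.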

Opening up too much too soon when it comes to APIs, however, could spoil that experience, and Apple is not willing to take that risk. The number of things you can do with an intelligent speaker, or any device linked to a digital assistant, is not, in my opinion, what truly matters.

Alexa has over 11 thousand skills, but how many are regularly used in a way that makes an impact on a user’s life? In a way, skills are the new apps. The numbers game works for a while, but what it boils down to is which skills hook me on the device. Everybody is going to be a little different. For me, my Alexa morning briefings and traffic alerts have become part of my morning routine.

The number of devices that can integrate an assistant is also not the most important thing in the overall experience. Just because you can integrate an assistant into a fridge or a washing machine does not mean you should. A voice UI and an assistant are two different things. Will I want to control my washing machine with voice? Sure. Will I want my assistant to be in my washing machine? No!

Curating the experience so early in the game is important. Our data shows that consumers who tried a digital assistant a few times and did not get the answer or the task completion they wanted gave up and never tried again. Getting disappointed users to try again is harder than getting consumers to try in the first place.

Don’t Draw Conclusions on Why We Did Not Get to Interact with Siri

Apple said that HomePod is a hub to control your home when you are out and about. It also said that Siri on HomePod can do the same things “she” does on the iPhone or the iPad. There are a lot of questions that do not have answers today: will HomePod be able to recognize and differentiate users when it is associated with an Apple Music family account? Will HomePod connect to my Apple TV? Will I be able to stream music services other than Apple Music? Will there be a developer kit?

When it comes to Siri, I would urge not to conclude that the Siri we know today will be the same as what we will discover once HomePod hits the market. We are aware that iOS 11 will bring enhancements to Siri. Aside from a new voice, and more context that will be used to suggest answers to follow-up questions, Siri will now support translation from English into Chinese, German, Italian, French and Spanish with more to come later. Siri will also be able to provide bank account summaries, balance transfers and support third-party note-taking.

I believe the reason we did not get to interact with Siri as part of the demo is that the experience will be very different, but there is, of course, more work to be done; otherwise, HomePod would be shipping now! By letting HomePod out of the bag, Apple made sure that people in the market for a speaker did not rush to get what is available today.

Slow and Steady Wins the Race

As much as we like to talk about who is ahead and who is behind, the reality is that the smart speaker and digital assistant markets are still at the start of a long opportunity, and Apple is still right in the game. While Siri might not come across as smart as Alexa and Google Assistant, “she” has been learning consumers’ preferences, habits, and behaviors for years now, and doing so across over thirty countries, albeit with different skills. Apple mentioned that Siri is used monthly on 375 million devices. This reach is a significant advantage and maybe the primary source of the discontent some feel right now: with Siri having this kind of advantage, why are we not seeing more? Well, I think we are about to see more!

Are We Wrong about the Future of Urban Commuting?

Over the past few months, there has been a lot of talk about the future of transportation, from cars as a service to self-driving cars. We seem to be expecting many changes that will redefine how we get from point A to point B.

Ride-sharing companies Uber and Lyft have grown in popularity, capturing consumer dollars and press attention, although not always for the right reasons. If making a brand a verb is a measure of success and awareness, then Uber has made it: “Uber it” sits quite happily alongside “Google it”.

Listening to presenters on tech conference stages talk about how millennials will only want an Uber account, not a car, or judging by how many people we know who use these services and are anxiously waiting for their driving days to be over, might not be the best way to assess how realistic or inclusive this future is.

At Creative Strategies, we just conducted a study of 1,000 US consumers on their commuting preferences as well as their expectations about the future of commuting. The key takeaway is that brands and industry watchers might want to consider that how America feels about this topic is not dissimilar to how America felt about the recent US presidential election. Urban vs. rural, millennials vs. baby boomers, higher income vs. lower income, male vs. female: all play a role in separating reality from fantasy.

Ride Sharing Services are Growing Thanks to a Few

If you look at where most Uber and Lyft users (based on app usage on their phones) are coming from, it is quite easy to spot an income gap in the user base. 31% of Uber app usage and 24% of Lyft app usage over the last quarter of 2016 came from American consumers in the top 25% income bracket.

In our research, only 18% of the consumers interviewed said they use Uber. Another 4% stated they use Lyft. Interestingly, 7.5% stated they shifted from Uber to Lyft as Uber has been in the news for all the wrong reasons. As one would expect, usage grows among early tech adopters: 29% of them say they use Uber, with 8% saying they use Lyft. What is also interesting, however, is that this market segment seems to be even more sensitive to the news surrounding Uber. An extraordinary 25% said they switched from Uber to Lyft. Millennials, who as a group are usually portrayed as highly socially responsible, do not seem to be impacted by the negativity surrounding Uber, as only 11% said they shifted away to use Lyft.

The majority of Americans still own a car (82% to be exact) and another 9% have access to one they share with a family member. Ownership declines slightly among millennials to 72%, while car sharing rises to 14%. When it comes to planning their commute, only 4% will consider whether to drive their car or use a ride-share service and another 2% would consider whether to use a car service or a ride-share service.

It seems safe to say the death of car ownership is highly exaggerated.

Design Trumps Safety and Environment

I have been arguing for quite some time that, while we wait for self-driving cars to get to market, there is a lot of value car manufacturers can deliver, especially around safety. Sadly, however, safety is not the primary factor driving consumers’ purchase decisions. Design beats safety 50% to 24%. Safety is even less of a priority among early tech adopters: 55% mention design as the key driver and only 10% mention safety.

Safety was, however, on the minds of those consumers planning to replace their car over the next 12 months. 49% indicated blind spot warning systems, 49% mentioned parking cameras, and 36% said auto-braking for collision avoidance were all things they were looking at. Although those potential buyers might not see these individual features as adding to their safety, they certainly do. The fact consumers do not think of these features as safety features is interesting, as it could pose a challenge in how to advertise them in a commercial, calling more for an individual show-and-tell than an overarching claim of safety.

Fully electric cars score high with early tech adopters. 52% see fully electric capability as a key feature of their next car. Only 17% of the supposedly environmentally conscious millennials prioritize a fully electric car for their next purchase.

Early Tech Adopters, not Millennials, are Ready for Self-Driving Cars

Big brands like Apple, Google, Tesla and more might all be in the race to deliver self-driving cars but consumers are certainly not holding their breath.

Only 11% of the consumers we interviewed said they are looking forward to computers taking over the driving and 29% stated they would never be seen in a self-driving car. Interestingly, 21% trust the technology but believe regulations will take a long time to make self-driving cars a reality. Early tech adopters are more open to the idea, with 29% looking forward to having computers take over driving and another 26% looking forward to using the time to catch up on reading or other content.

Who consumers trust with bringing to market a reliable self-driving car is not a done deal, at least when it comes to the runner-up brands. Tesla is the winner across all segments, but the number two and three spots change quite a bit depending on the group you are looking at. Overall, 28% of consumers believe in Tesla while 24% believe it will be a traditional car manufacturer. Among early tech adopters, the support goes to more tech brands, with Apple at 30% and Google at 17%, but Tesla, at 34%, still best encapsulates the blend between cars and tech. Millennials also see Tesla as the brand most likely to deliver a reliable self-driving car: 38% in the group mentioned the Tesla brand. The number two choice is Google at 23%, followed by traditional car manufacturers at 15% and Apple at 13%. Women place almost equal faith in traditional car manufacturers (28%) and Tesla (27%). With men, the split between the two is 22% to 28% in Tesla’s favor.

I am not surprised consumers don’t have it all figured out when it comes to the future of commuting. A world where we no longer own cars and rely on self-driving ones when we are a passenger is quite different from today. A challenge for all involved though, when it comes to brand trust and intent, is that consumers are certainly influenced by how vocal companies are about their plans.

Imagining the Future of Commuting is Harder than Imagining the Future of Computing

How consumers feel about cars and their commute today, coupled with the uncertainty about artificial intelligence taking over from humans, shows how imagining how different our commute will be in 10 years is more complicated than imagining we could take our phone or PC outside of our home or office. Harder not only because it challenges our beliefs in technology but because it requires changing habits and practices that have been set for decades. Think about waiting to turn 16 or 18 to get your driving license and how empowering getting your first car is. How can getting your own Uber account make up for that? How can you trust a computer in a car not to crash as often as your PC or phone does, knowing your life and the lives of others depend on it? These are big questions I am not sure consumers have answers for yet.

Millennials will Drive the Digital Transformation of the Workplace

If you ask ten different organizations what digital transformation is, you will likely get ten different answers. As is often the case, the answer depends on where each organization is in the process of integrating technology into its workflow. Many believe digital transformation means getting rid of paper, which of course is an oversimplification and not entirely the point. Others believe it is about using technology to do the same things we have always done. In other words, much of the focus is on digital and not so much on transformation.

Mobile was a Test Run

Let’s be honest, enterprise did not see mobile coming. Sure, they saw mobile phones but the impact smartphones would have on their IT department and business was never clearly understood until it was upon them. Smartphones were the start of employees’ empowerment. Carrier subsidies took away the cost barrier for the latest technology, making it accessible to the masses and those masses wanted to use that technology at work, not just at home.

“Bring Your Own Device” (BYOD) could never have been a trend when technology was so expensive only a few could afford it. We went from wanting to take home the PC we used at work to bringing to work the smartphone we used at home. Smartphones were apps’ Trojan horse. Once we had our phones with us in the office, we wanted to continue using the same applications and services we used at home. So, to BYOD we added BYOA (“Bring Your Own App”), as it was about the overall experience new mobile platforms such as iOS and Android were delivering.

Most organizations went through three phases: denial, resistance, and acceptance. Denial lasted a few years as devices came through the back door, then came a few years of resistance trying to impose mobile device management tools to limit what users could do, all in the name of security. Finally, we got acceptance with iOS now present in most Fortune 500 organizations recording a satisfaction rate of 96%.

The Rise of Millennials in the Workforce

What played in enterprises’ favor, at least a little, was the fact that not everyone in their organizations, especially their C-suites, was quick to adopt these new devices and apps. That digital divide is going away faster and faster as new graduates are hired and younger managers move up the corporate ladder.

According to the U.S. Census Bureau, millennials surpassed Generation X as the largest part of the American workforce back in 2015. Projections put millennials as comprising more than one in three adults in America by 2020 and 75% of the workforce by 2025.

It goes without saying millennials are very tech savvy. But the differences with baby boomers do not end there. Research has shown boomers identify their strengths as hardworking and optimistic, and they are used to navigating organizations with large corporate hierarchies rather than flat management structures and teamwork-based job roles. Millennials are drastically different: well educated, self-confident multi-taskers who prefer to work in teams rather than as individuals and who value a good work/life balance.

A recent study by Merrill Edge showed millennials have very different priorities in life compared to boomers. With the focus on personal achievements, millennials want to work at their dream job (42% vs. 23%) and travel the world (37% vs. 21%).

What is a Millennial’s Dream Job?

At Creative Strategies, we asked over 1,400 18-to-24-year-olds in the US what would make them not choose a company to work for after they were offered a job. While 35% were just happy to get a job, 46% would see not being able to work flexible hours as a dealbreaker. 21% would walk away from a job that did not let them use a smartphone for work in conjunction with their laptop or desktop, while another 17% could not tolerate an IT department that restricts what can be done with a smartphone. Finally, 14% could not be in a job that did not offer collaboration practices that fit their desired workflow, such as using apps like Google Docs or Slack, as well as video conference support.

Workflow is different for millennials. Aside from prioritizing collaboration, 65% said their preferred method to communicate is messaging apps. When it comes to collaboration, Google reigns supreme with 81% of US millennials regularly using Google Docs, 62% Google search, 59% Google Mail. Outside of Google, Apple iMessage scored the highest, with 57% of millennials saying they regularly rely on it, followed by Microsoft Word with 51%.

When it comes to devices, given a choice of laptop brands by their employer, there are only two brands that seem to matter: 62% would pick an Apple Mac and 14% would choose a Microsoft Surface Pro. Mobility is also no longer a “nice to have”. 34% of millennials say it is extremely important that the software, services and business processes they use for work are available on mobile as well as on a laptop. Finally, when coming into a job, 46% would prefer to be able to choose what laptop is given to them.

Digital Transformation to Attract and Retain

Transforming your business by embracing technology, and the innovation it enables in business models and workflows, is necessary to attract talented employees. If that were not enough of a driver for companies, they should think about where the big spenders will come from. If 75% of the workforce by 2025 will be made up of millennials, where do you think the largest source of revenue will come from for businesses around the US? Where will the buying power be if not with millennials? Businesses will need to embrace digital transformation to deliver what their future customers will want.

Everybody Wants a Bite of iOS, Apple remains Mostly Self-Contained

A few hours after this column is published, Google could be announcing that Google Assistant is coming to iOS. Last week, Microsoft announced that several new features of the Windows 10 Fall Creators Update, such as Pick Up Where You Left Off and OneDrive Files on Demand, will be available on iOS.

Everybody wants a piece of iOS or, better, everybody wants to get to the most valuable consumers out there. You have heard this before: Apple customers are very valuable. You only have to look at what they spend on hardware and the growing revenue they drive at the App Store and subscription services to get an idea as to why other ecosystem owners might want to reach them.

Not Having a Horse in the Race makes You Free

When your main source of revenue is not hardware, being device and, to some extent, platform agnostic becomes so much easier. For Microsoft and Google, the core business revolves around cloud and advertising respectively and, while they sell their own devices as well as monetize their operating systems, they have made the decision to engage with consumers on iOS.

For Microsoft, having Office, OneDrive, and Cortana available on iOS and partly on Android allows it to reach more users than it would through PCs alone. Of course, Microsoft has nothing to lose in mobile, as Windows Phone was never able to get more than single-digit market share in the US. Yet, this tactic is not limited to phones. These apps and services are also available on iPad and Mac, segments where Microsoft and its Windows partners have a very strong interest.

Microsoft’s long-term play was described very well at their Build Conference keynote with the slogan “Windows 10 PCs heart all devices.” I would have gone a step further and said “Windows 10 hearts all devices,” but that would not have been very politically correct towards their partners. Whichever slogan you prefer, the idea behind it is spot on. Let users pick what phone they want to use (or tablet, or wearable) but make sure that, if they have one Windows 10 device, their experience across devices is the best one they could have. If you get the best experience as a consumer, you will want to stay engaged and will choose services and apps delivered by Microsoft over what comes pre-installed on the phone.

Google has always had a pretty agnostic platform approach when it comes to its apps and services. The experience is often better on Android, but that does not mean consumers do not get benefits from using those apps and services on other platforms and devices. Google Maps and Chrome might be the best examples thus far, but soon it might well be Google Assistant. While other platforms might limit how deep an integration assistants such as Google Assistant and Cortana can have, they still deliver some value to the user and collect valuable information for the provider.

As we move from a mobile-first to a cloud-first and AI-first world, knowing your users so you can better serve them will be key. Google hoped to do that with Android but, unfortunately, despite millions and millions of users owning Android-based devices, it did not provide the return Google was hoping for. Users of Android simply do not equate to users of Google services. So, making sure to get to the valuable users is key for this next phase, especially as the bond with the user will be so much tighter than any hardware or single service has been able to provide before.

Hardware as a Means to an End

Selling hardware can be a great source of revenue, as Apple can tell you. For Amazon, Google and, to some extent Microsoft, however, hardware is more a means to an end than a source of revenue any of these companies will ever be able to depend on.

Being able to personify or, in this case, objectify, the vision they have for their services and apps is key. Whether it is a home for Alexa and Google Assistant or a TV for Prime Video or an in-car experience for Google Maps, it is important users experience the best implementation of that end to end vision.

Yet, if your business stability does not depend on it, you are not spending marketing dollars to convince buyers to switch their devices or upgrade them. You are instead focusing on delivering the best value wherever you can. As you move to other hardware, however, you take value away. When there is no value left, the hardware itself will look much less appealing to the most demanding users, increasing the risk of churn. Ben Thompson recently made this very point about Apple in China, where iPhone users are so engaged with services from local players that the value of Apple is reduced compared to what we experience here in the US, where we might subscribe to Apple Music, use Apple Pay, and so on.

Follow the Money

So where does this leave Apple and its hardware-centric business model? Well, if you have been paying attention to recent earnings calls, this leaves Apple pivoting from hardware to services, with services revenue reaching its highest value yet at $7 billion. App Store revenue is growing 40% year over year with an installed base of 165 million subscribers, and Apple Pay transactions are up 450% over 2016.

For now, it does not look like Apple has much to worry about. Not only are the most valuable customers on iOS and macOS but they are engaged with the services and apps on offer. As the offensive from other players intensifies, however, Apple should look at playing a similar game, even if this means opening up some of its services and apps to other platforms.

Microsoft proudly announced last week that iTunes will be coming to the Windows 10 Store. Many were quick to point out that nobody really uses iTunes anymore but that seems to me a very iOS-centric view. There are still many PC users that use iTunes and they represent an untapped opportunity for Apple Music, a service they might not consider using on their phones but, as part of iTunes on their PC, could look very appealing.

There are stickier services like iMessage or Apple Pay and Siri that could drive engagement through other devices. Think about the ability to iMessage on a PC instead of using Skype. Or the option to create an Apple Pay account that works in other browsers. Or Siri that speaks to you through your appliances.

Finding the right balance between too closed and too open is not easy. We know how open can hurt interoperability, but we also know how closed can limit growth. This is not about defending. That can be done by making sure to deliver a superior experience on Apple hardware so that, no matter what other apps and services are available, users will never consider anything but what is pre-installed. It is rather about making sure no opportunity is left untapped, which means going after the money to be had.

Amazon not Standing Still in Pursuit of Voice-First Homes

On Tuesday, Amazon launched Echo Show. After weeks of speculation, and a few leaked pictures, we finally have it: Alexa has a screen. You can now see music lyrics with Amazon Music, video clips, cameras, live video calls, Prime photos, recipes from YouTube, and more. You can still navigate all of that with your voice despite the 7″ screen being touch-enabled. Priced at $229.99, Echo Show is available for preorders now and ships on June 28th. I had the opportunity to sit through an extensive demo of the device and was surprised at how much I liked the screen.

Show and Tell

When the first rumors around a possible Echo with a screen started to circulate a few months ago, I was quite vocal in my disapproval. My big concern was adding a screen might take away from the voice-first experience Alexa is supposed to deliver. I saw the screen as a big risk at a time when many consumers are easily falling back into old habits which, for most in the home, means going back to typing on our phones.

It seems as though current Echo owners might not be as worried as I was, however. In a recent study Creative Strategies ran in collaboration with Experian, consumers were asked a long list of questions about their current usage preferences and satisfaction, as well as how they felt about some statements aimed at capturing their perception of Alexa and their sentiments on possible new features. 20% of consumers strongly agreed and 32% somewhat agreed with the following statement: “I wish my Amazon Echo had a screen (to display text or photos, visualize search results etc…).”

When the Amazon team shared the name Echo Show and then proceeded to walk me through the features, it was not difficult to see why they chose that name. The screen is not about watching, displaying, or viewing. It really is about showing some of the things Alexa would take too long to tell you. It is about complementing your experience and still using voice-first as your main input.

My initial concern went away because Echo Show is not trying to be many different things at once or to replace other viewing devices in your home. It is really all about adding value to Alexa for tasks where showing you the information makes more sense than telling you. So, if you ask Alexa what the weather in Seattle is, showing you the forecast for the next five days makes more sense than having Alexa read it out. The same goes for showing you the trailer of a movie after you ask what is playing at the cinema. For the user, there is no change in behavior required compared to the original Echo.

Want to Say a Quick Hello to the Family? Just Drop In

One feature that has not gotten much coverage, probably because it is difficult to explain if you have not experienced it, is Drop In. With Echo Show and the Alexa app, you can just say “Alexa, drop in on X” and it will make the Echo Show connect via the camera to the home you are calling. The receiver of the call will have 10 seconds to accept or turn off the video while the person who originated the call sees a frosted glass effect. This is clearly not for all your contacts. If you would not like a person to drop by at your front door unannounced, he or she should not be able to Drop In on you either. The fact Amazon has Drop In off by default and the user needs to enable it tells you they do not think of this as central to the experience but as a nice addition for some people.

Personally, I know I will love to drop in on my family when I travel or even on my dogs and cat when we are not home. I also know I can count on one hand the people that will be able to drop in on us. Amazon has taken great care in giving you time to accept the video call as video and audio or audio only but I still think the user needs to figure out what this feature adds to their experience: an easier and more personal way of calling someone or home/pet/older relative monitoring? It is about the quality of your interaction with a few “special” people rather than the number of people you can share this with.

Although some people might compare Drop In to Google’s Knock Knock, I think the use case is quite different, as Drop In is not the main way you will communicate with people through Alexa, but it might be the preferred one when it comes to a select number of people. Adding communications to Alexa was not something our panel felt as strongly about as having a screen, but there was certainly an interest. When asked how they felt about the statement “I would be interested in using my Echo like a phone to communicate with others”, only 11% strongly agreed and another 24% somewhat agreed. The less clear stand on adding communication capabilities to Alexa has more to do with how much we rely on our phones for communication. As we are not planning to go without a phone anytime soon, Alexa might seem a little superfluous to some.

Priced to Sell

At $229.99 and two for $349.99, Echo Show is aggressively priced. Some might think this is Amazon just trying to get ahead of Google, Microsoft and possibly Apple’s upcoming announcements in the space. However, I believe this is Amazon continuing on its path to make sure we have as many Echo devices as possible. It is quite clear from our data Echo owners are using different devices in different rooms already. While Echo seems to reign supreme in the kitchen (35%) and the living room (23%), Echo Dot fits first and foremost in the bedroom (24%) and then in the living room and kitchen equally (18%).

The speakers in Echo Show are meant to be even better than the original Echo but positioning this device in your home might require a bit more planning simply because, while your voice might travel, you do want to make sure Echo Show is where you can best take advantage of the screen. Personally, I think the kitchen counter is the best place because there are many use cases that fit the kitchen — following along on how to prepare a dish or morning briefings that now also support video content. The kitchen, at least for now, is probably the room where Echo Show has the least competition when it comes to screens, giving it the best opportunity to show off its capabilities.

Prioritizing Engagement

The race for control of the connected home experience is far from over, but it is clear Amazon not only had a considerable head start but is also committed to this space. Nothing says it better than building a portfolio of products with different use cases and price points. Long-term engagement with these devices is critical, and discovering skills plays right into that. Alexa now has more than 12,000 skills, but let’s not get caught up in the same game we played with app stores. It is really the quality of those skills, not the quantity, that will make the difference to my experience as a user. I hope the screen in the new Echo does not become the star of the Show but rather the best actor in a supporting role for Alexa.

Microsoft’s Two-Pronged Approach to Education

On Tuesday, Microsoft held an event in New York where it presented its new version of Windows, called Windows 10 S, as well as the new Surface Laptop. With the combination of the two, plus apps targeted at teachers and educators, Microsoft is hoping to gain traction in K-12 as well as higher education.

In January 2017 at the BETT show in London, Microsoft announced “Intune for Education” which delivered a simple device management solution for schools that can customize over 150 settings, apply them to hardware and apps, and assign them to a student so they “follow” any device they use as they log in. Microsoft also announced a partnership with Acer, HP, and Lenovo to bring to market Windows 10 PCs starting at $189 including some 2-in-1s.

Chromebooks have been steadily growing in the US education market which, according to FutureSource Consulting, represented close to 13 million units in 2016, 58% of which were Chromebooks. While most of the commentary around Chromebooks’ success rests on hardware pricing, there is a lot in the simplicity of the platform that is a big appeal for schools. However, with prices as low as $120, competing against Chromebooks is not an easy task.

Windows 10 S aims at taking Microsoft a step further from what we have seen thus far, especially when it comes to the initial set up of devices and subsequent management. By stripping down Windows 10 to its essential components and granting access to only store apps, Microsoft is hoping to deliver the simplicity schools are looking for.

Windows 10 S will need OEM Support to make a Difference

The battle in education is, however, a Windows/Microsoft battle for now, not an OEM battle, as most Microsoft hardware partners are selling Chromebooks. While Microsoft announced a list of partners that will bring to market Windows 10 S devices, the commitment will be judged on how many models, channel support, and overall push we will see from brands such as HP, Acer, and Dell.

No details have been given on the royalty OEMs will pay Microsoft for preloading Windows 10 S and how that differs from Windows 10 Home and Windows 10 Pro. Nor have we heard whether Microsoft will help in any other way, such as marketing, to position the devices. My guess is Microsoft will have to do something, at least initially, so that Windows 10 S actually gets a shot to prove itself.

The Surface Laptop competes with the MacBook Air not Chromebooks

Looking at the Surface Laptop Microsoft announced during the event and dismissing Microsoft’s chances to compete against Chromebooks is a mistake. Surface Laptop, in my mind, has a different role to play.

First, it plays to millennials’ preference for a laptop form factor vs. a 2-in-1 or a tablet. In a recent study Creative Strategies conducted in the US, college students clearly shared their preference for a traditional laptop form factor, with 73% primarily using a laptop when working on a school or work project.

Second, the Surface Laptop aims at picking up higher ed students who, in the past, might have picked up a MacBook Air. Eighty-eight percent of Mac users in the Creative Strategies study said they would pick a Mac if their employer offered them a choice. 9% said they would pick a Surface. Surface was the only Windows-based brand to register any real interest among the overall panel, with 16% of millennials mentioning Surface as the brand they would choose. If we exclude Apple from the brand options and only consider brands within the Windows ecosystem, the preference for Surface grew to 43%. If you are not convinced, just watch the video Microsoft played at the launch. It is a love affair between you, the user, and Surface Laptop. They could not have made it more personal if they tried. I guarantee you, that is not how a school administrator picks hardware.

Lastly, Surface Laptop can appeal to those enterprises invested in the Windows ecosystem but who are looking for more affordable Surface hardware and a more traditional form factor. If they have not yet embraced Windows 10 apps, enterprises can upgrade Surface Laptop to Windows 10 Pro.

Windows 10 S has a Role to Play Outside Education

While the focus of Microsoft’s event was education, I see Windows 10 S playing a role in other areas as well, although Microsoft did the right thing by not talking about it at the event. People need time to get their head around Windows 10 S and trying to make it something for everybody would have been too confusing.

I see Windows 10 S as the modern implementation of the Windows ecosystem, one that puts Windows 10 apps right in the middle of the experience. Because of this, I see Windows 10 S appealing to consumers who want a mobile-first experience and are not concerned about support for legacy apps. I also see Windows 10 S potentially appealing to enterprises that have already transitioned to a Windows 10 app environment.

From a consumer perspective, I hope to hear more from Microsoft next week at Build on how it plans to help developers invest more in Store apps. This is going to make a huge difference in how users see their devices going forward, from productivity-only to a one-stop device for both work and play. There is no question Microsoft has been putting a lot of effort into first-party apps, but more needs to be done for developers so the vision of inking, mixed reality, and 3D printing is brought to life sooner rather than later.

As was to be expected, a lot of attention was given to Surface and Windows 10 S, but the other tools Microsoft launched today, such as Minecraft Code Builder, Microsoft Teams for education, and the STEM programs and camps, really show the full commitment, not just to education but to the next generation of Windows users.

Can Tech make Me a Fashionista?

Last week, Amazon was awarded a patent for an on-demand manufacturing system designed to quickly produce clothing and other products — linen and curtains and such — only after they have been ordered. Amazon applied for the patent in late 2015 and, since then, it has been growing its fashion inventory as well as its own clothing brands. According to a Bloomberg report published in September 2016, Amazon was named the biggest online clothing seller. Amazon got to that position by adding items in step with the confidence consumers had in buying them online. Starting out with shoes (easy to size) and T-shirts (a relatively modest investment and also easy to size), Amazon grew its range, building from basic items to fashion powerhouse names such as Kate Spade, Vince, Ted Baker, and Michael Kors, just to name a few.

According to a recent report on commerce by GWI, 20% of online consumers in the US bought clothes online in the last quarter of 2016. Another 14% bought shoes. If you don’t think that’s significant, consider that only 14% of consumers bought online the item that “killed” brick-and-mortar stores: books.

Consumers are becoming more comfortable with buying clothes, shoes, and accessories online but new ways of selling and new technologies can push this market even further by making the whole experience more personal.

Fashion as a Service

Subscription services in shopping have been growing in popularity over the past few years. What in most cases started with organic fruit and vegetables soon developed to include razors, toothbrushes, dog treats, toys and, more recently, fashion items. Several companies deliver shirts and lingerie on a monthly or quarterly basis to happy but busy customers who like the consistency of a brand they love being delivered to them.

But the model is changing. While Uber and Lyft are getting all the publicity for revolutionizing transport and possibly driving – no pun intended – consumers away from owning cars to simply ordering one, fashion has also been moving to a more hybrid subscription-rental service. Le Tote is a good example of a successful service. They deliver a tote with items based on style and fit as well as personal preferences. You wear anything in your tote for as long as you want, then send it back when you are done, ready for a new order. If there is something you like, you can keep it and buy it at a discounted price.

The ability to change your wardrobe often with trendy clothes that fit your lifestyle needs, coupled with the convenience of delivery, is certainly something busy women, or women who do not enjoy the shopping experience, can appreciate. Adding further customization to the fit of the clothes would drive more people to try this kind of service and is where new technologies such as AR and connected sensors can play a role.

Visual Computing and the Buying Experience

With Augmented Reality and Virtual Reality coming to our phones and PCs, we see the potential for shopping experiences to be redefined. For example, being able to see how a color you picked will look on your walls and go with your furniture, size a new sofa in your family room, or try your new car on for size without having to go to a dealership is becoming a reality thanks to AR and VR.

The possibilities are endless and fashion can benefit from this too. Already today, there are apps that let you try an item on, such as glasses or a hat, via a picture of you. There have also been services that ask you questions about your size, weight, ethnicity, and pants and collar sizes, then offer what they claim is the closest thing to a tailored garment. Some use a combination of the two methods and marry your inputted information with your picture to come up with a custom solution. Custom clothing company MTailor takes it a step further and offers an app that can measure you with the camera on your phone and deliver custom shirts, suits, and jeans.

These solutions have been relying on 2D pictures and inputted info, which leave plenty of room for error. With smart fabric and sensors being added to clothing, there are more options now to properly measure size and use that information to find the right clothing. LikeAGlove started a couple of years ago using smart leggings to measure your shape and then transfer the data to an app. Aimed at people who are on a fitness program to lose weight, they claim to measure your progress better than a scale, which would not show you how your body shape changes as you lose the pounds. The app also offers help in finding the jeans brands and models that best fit your shape.

If you combine sensors for shape tracking with AR, you could see how certain designs would look on you and then have them tailored to your shape, custom-made, and delivered. Today, Amazon announced Echo Look, an Alexa-enabled camera that lets you take pictures and short videos using built-in LED lighting and a depth-sensing camera with computer vision-based background blur. Echo Look will let you see yourself from every angle and offer a second opinion, thanks to AI, on which outfit is best as well as suggest brands and items based on the images you collect in your style book.

Bots and Digital Assistants as Stylists

With so many businesses focusing on bots and big ecosystem players focusing on Digital Assistants, I would expect both will be able to serve my needs when it comes to shopping for clothes and accessories. Store-dedicated bots could help me navigate the latest collections, or cross-store bots could fetch the item I want or need at the best price with the best delivery option. Offering a personal shopper that has information about your tastes, as well as your look and size, could be a differentiator customers are either prepared to pay for or see as an added benefit in an all-inclusive service. The focus here would be less on tailored clothing and more on an actual shopping experience for those consumers who do enjoy shopping online and like to do so efficiently but, most importantly, want to know they bought what best fits their needs.

For a more customized experience that shifts from a personal shopper to a “lady in waiting”, think how great it would be if my assistant could suggest my daily outfit based on the weather and the appointments on my calendar. That would be the perfect solution for busy people who do not want to default to wearing a gray t-shirt every day.

There is no question technology will continue to change the way I shop for clothes. What I want is for tech to help me find what I need, what fits, and what is best priced, all nicely wrapped up in a box, delivered to my door. Tech might still fail to make me a fashionista but it would have succeeded in making me a very happy shopper.

Are Mature Markets Poisoning Emerging Markets’ Tech Experience?

The other day, I was reading the fascinating and scary story of a woman in Kenya who thought she had HIV because an app told her so. The app was a hoax but she could not have known it, as she had downloaded the app over Bluetooth from a friend and never got to read the reviews that warned about the scam. The BBC story was centered on a report funded and commissioned by the Bill & Melinda Gates Foundation and developed by the Mozilla Foundation in close collaboration with Digital Divide Data.

The report offers a very interesting snapshot of what technology and smartphones mean in a country like Kenya. The good is how smartphones can help Kenyans, especially low-income earners, not feel left out of society. The bad is online gambling becoming a larger issue. The ugly is that consumers in emerging markets, especially those new to smartphones, discover their experience is shaped by trends set by the larger international organizations that control the ecosystem.

This last point made me think about how different the smartphone market is compared to the feature phone market and not just because the hardware is different.

Feature Phones were More Customized than Smartphones

Going back to 2009/2010, emerging markets were the future of mobile, as the overall mobile phone market was still largely made up of feature phones and manufacturers had a more focused portfolio for emerging markets. The race to control emerging markets was very much open, with Nokia fiercely defending its position in markets such as Africa, Latin America, and Asia.

What was unique about Nokia was that, even back then, its focus was on services as well as hardware. While lowering the price of feature phones, Nokia focused on lowering the requirements for data consumption, making some of its services, such as music and maps, available offline. Nokia also implemented a financing service for small businesses as well as a money-transfer service called Nokia Money.

These were the years when most of the hardware was not yet touch-based and was customized with keyboards that reflected the different languages. Applications were also pre-loaded to reflect local cultural preferences. Aside from possibly Latin America, which endured for years the hand-me-downs from the US, consumers in emerging markets were given devices that mostly reflected their needs.

These were also the years before local manufacturers, empowered by Android, started to make a dent in the market share of tier-one players and fragment the market in such a way that replicating what Nokia had became impossible due to the lack of economies of scale.

With the shift to smartphones and the pressure on margins, many vendors are prioritizing high growth markets such as China and India while trying to serve the rest of the emerging markets by leveraging what they have in the portfolio rather than customizing to the country’s needs.

Software: One Size Fits All

With the advent of smartphones, touch, and the shift to software, customization was no longer needed to be able to sell in a specific country. One phone model was shipping across more markets than ever before as most settings were delivered via software. Apps were no longer pre-installed but could be accessed through app stores that offered more international content than they did local. While software might overcome language barriers, it has less success overcoming cultural differences. Apps suitable in America or Europe might not be so in the Middle East or Africa where much of the female population is highly dependent on the men in their lives to grant them access to technology, for instance.

While the size of the emerging market population is still very appealing to hardware vendors, it is not always so compelling to developers and service providers. Tier-one developers and service providers might lack the cultural knowledge to customize and they might see the connectivity challenges and low-income barrier as issues that will always dampen their opportunity, making the investment less than worthwhile.

Many emerging markets are also mobile-only rather than mobile-first markets, making the relationship consumers have with technology quite unique. Many emerging market smartphone users have no measure of comparison for what the digital world can deliver, which makes them vulnerable to exploits. The case reported by the BBC is a very good example. Esther did not know her phone could not possibly diagnose whether she had HIV through reading her fingerprint on the screen. For all she knew, technology was that good.

Are Emerging Markets a Duty or an Opportunity?

Tech giants cannot ignore emerging markets in their path to world domination. Google tried, through the Android One program, to lower the price of smartphones in emerging markets so Android could continue to grow. However, the strength of local players, combined with the little differentiation the program gave the devices, led to a weak value proposition for partners and customers alike. Plus, focusing on an online channel in markets that mainly sell via small mom-and-pop shops did not really help. Google also focused on improving connectivity by flying internet balloons — an endeavor that is taking much longer than first anticipated to become a reality.

A couple of years ago, Facebook started tweaking its user experience for emerging markets so the content the user was looking at was prioritized and loaded first, while side stories were not loaded. It also launched an accelerator program to come up with ideas that make advertising rewards relevant to local users.

Finally, Facebook also focused on connectivity, first with Free Basics. Users do not pay for using Facebook and other apps but some governments, like India’s, found it too limiting. More recently, Facebook launched Express Wi-Fi in India as a renewed attempt to offer connectivity at minimum cost through the deployment of public Wi-Fi.

The hurdles both companies have faced, however, underline the challenges of looking at emerging markets from the comfort of Silicon Valley. International telcos with interests in emerging markets such as Telenor, Megafon, and Vimpelcom (now Veon) are also trying to get a slice of the pie and they might just have the advantage of having a ton of data on the very consumers they want to serve.

The Win-Win when Tech Improves Life in Emerging Markets

Rather than focusing on lowering costs or lowering service requirements so consumers in emerging markets can afford to buy devices and subscribe to services, tech companies should focus on improving life in emerging markets. Technology should be used to improve education, eradicate diseases, improve housing and transportation and, ultimately, create more wealth and empower people to become potential customers for Amazon, Google or Facebook. While the final goal of market growth might be the same, I would very much argue this means to an end would be much more rewarding for emerging markets.

When You need to Explain to Your Kid the Internet Is not Safe

Last week, it was time for me to explain to my child that the internet isn’t a safe place. It wasn’t pretty. My nine-year-old daughter has been going online on a browser with parental controls and playing multiplayer Minecraft with her friends but nothing else — or so I thought. Last week, she mentioned playing with these “friends” on an app that lets you create a family of dogs. I remained calm as I explained we had discussed this issue before and that she was not allowed to go online because people on the internet are not always who they seem to be and they might ask her personal questions. With a somewhat annoyed tone, she replied that she is not naïve and that when “this boy” asked her how old she was and where she lived she did not reply. That is when I freaked out. I took a deep breath and started explaining.

Just because You are not Face to Face with Someone, doesn’t make it Safer

While not being physically in the same room or playground might mean you do not get punched or pushed or mocked, it does not mean they cannot hurt you. Just because you do not see them, it does not mean they are not real. That was the easy part.

“But mom, they are just kids like me!” my heartbroken daughter whispered. That was when the hard part started. Explaining that people online can pretend to be kids and they might be interested in her the way grownups are interested in each other was the hardest thing I ever had to explain. Much harder than explaining where babies come from. Within a couple of minutes, my daughter went from my sweet little girl to the potential victim of an online predator. I know I might be overreacting. I know there are more genuine kids online than there are predators but there are also the numbers. According to the US Department of Justice, approximately 1 in 7 (13%) youth internet users received unwanted sexual solicitations. One in 25 youths received an online sexual solicitation in which the solicitor tried to make offline contact. So, forgive me but it’s my baby and I am not taking any chances. As much as I think she is too young to fully understand what I am talking about, it is my duty as a parent not to scare her but to make her aware of the risks. This is no different than telling your children they should not talk to strangers the first time they are somewhere without you at their side.

Technology Alone is not Enough

There are many risks our children are subjected to when going online. Some involve their information and data and others involve them as a person. In a way, I look at the former as security risks and the latter as safety risks. While tools can help with many security risks, it is only education and awareness that will help with safety risks, in my view. A key part of this education is to help them understand the internet is not just magic. There is a real human behind anything that happens online, whether that presence is direct or through software programmed by a person. Educating, not scolding, so my daughter feels like she can come to me and ask questions is important and, of course, challenging.

The good old days of loading an antivirus app and restricting access are over. Phones and tablets have changed that dramatically and, although parental control tools for these devices have been growing over the past few years, they concentrate on the web rather than apps, which makes the whole “being safe” effort more complex. The small screens of these devices also offer less visibility to parents compared to a console game played on the TV in the family room. This means we cannot just “fix” it with technology. We need to take an interest. Whether we monitor the apps our kids use or we vet every app before they use it, it is up to us to keep up with the whole process.

I dropped the ball. My daughter knows she needs to ask permission before purchasing any app. When that happens, we go through the reviews together to evaluate how good they are and read the description to better understand what is behind the catchy name. Yet, I never thought about vetting the free apps she downloads as we have set up an age filter for the apps she can access. It goes without saying I do now. Clearly, the age filter helps with content appropriateness but not necessarily kids’ safety.

Monkey See, Monkey Do

Fortunately, I do not have to worry about social media yet. At nine, my offspring does not have a social media presence other than what I post about her. And this is, of course, a whole different problem. Because she sees me sharing what we do on Facebook and “talking” to people I do not necessarily know on Twitter, she might think it is OK for her to do the same. As in real life, kids do pick up social cues from us without necessarily having all the information to make an informed decision. So, for some behaviors, leading by example will suffice (“Don’t text and drive”). For others, we will need to educate once again.

I now ask permission before sharing something about her, including writing this article. I explain that, very much like what you say in real life, what you post has implications. I explain why I post, what I post and, more importantly, I explain why I do not post certain things — well aware not all my decisions are actually foolproof.

Learn so You can Teach

Being a parent in a digital world is not easy but one thing is certain — it will be a lot easier if we, as parents, are informed and up to date with what children do. Our kids are growing up in a world full of screens and where social media rules. As parents, we need to make sure we are a step ahead when it comes to technology. If we think today is scary, we should try and imagine what it will be like when our kids will live in a VR world we do not have access to. While we can ask content providers and app stores owners to be more transparent and accountable, the buck stops with us.

Even Millennials Don’t Know What to Do with Tablets

This week marks the 7th anniversary of the iPad’s availability on the market. The Verge reminds us of their initial take on the product. There were two main stances iPad reviewers took back in 2010. Some industry watchers thought the iPad could become the next computing platform — at least for some people. Others believed the iPad would mainly be successful with users with extra disposable income as well as users who wanted a simpler computing experience and did not need much.

Seven years in and the debate remains the same: is the iPad the next computing platform or merely a superfluous device? Apple is certainly trying with the new ads to make us believe the former is true but consumers do not seem to be convinced yet.

I am focusing on the iPad because, although many tablets followed it, none ever came close to the volumes Apple has been able to sell. Even now when sales are in decline, iPad remains the best-selling tablet in the market.

Perception is a Great Hindering Factor

At Creative Strategies, we recently ran an extensive study looking at Millennials’ preferences for both devices and apps when it comes to collaboration. It is interesting to look at their device preference for productivity because this is “The Touch Generation”. We focused on 18 to 24 year olds to gather their expectations as they enter the workforce or shortly after joining it. This is the age group that experienced the early stages of touch on smartphones the same way Gen Z is today experiencing many voice-first interactions. They are not only very comfortable with touch but get a lot done with their phones, which would put them in an ideal position to understand what they could do with a tablet.

We asked our panel of over 1200 US millennials several questions around how they prefer to collaborate, what devices they use, what apps and services they use and what communication medium is their preferred one. One of the questions asked them to think about which device they would take on a business trip if they knew they had a project due at work. As you can see from the chart below, the tablet was clearly not the device of choice. Only 12% of male and 16% of female millennials would take a tablet with them. The rest of the panel was pretty evenly split between taking a smartphone or a PC.

It is when you dig into why they would pick that particular device that we get some clarity on where tablets stand. Most millennials who picked a smartphone valued the communication side of the device. Being able to make calls and use messaging apps was the biggest selling point. There was also a consensus that “anything that needs doing can be done on a phone.” On the PC side, the two main drivers were screen size and range of apps. Millennials who would not leave without their PC really appreciated the larger screen real estate and believe there are certain productivity apps they would not be able to run on a phone. Communications mattered to these millennials too, but they thought that, between apps and VoIP calls, they could get the job done. Some were even prepared to go old school and use a landline if absolutely necessary. The bottom line for people choosing the PC was, if you want to do real work, there is no other option.

The smaller percentage of millennials who would take a tablet on their trip are Apple’s sweet spot. They are the ones who understand they can do everything they can do on a phone, including communications, and get a larger screen. They referred to the tablet as a happy medium, the best of both worlds, and a device that gets the job done. Interestingly, a few called out Microsoft Office as an app that makes using a tablet as a main device easy. These are the users who believe in the ability of a tablet to be the next computing platform.

Overall, this set of results very much points to the perception the iPad, the best-in-class tablet, had back in 2010 — that it was only good enough for light productivity. It still rings true today for many consumers, even open-minded millennials. What has also negatively affected tablet uptake has been the progress in processing power and screen size many smartphone models have undergone.

Are 2-in-1s a Tablet or a PC?

While analysts and marketers love putting labels on devices, consumers seem to be a bit more pragmatic. If it quacks like a duck and it walks like a duck, chances are it’s a duck!

We wanted to test whether millennials were interested in an iPad Pro, Surface, or other Windows 2-in-1s as their primary laptop, so we asked. Overall, only 18% of the panel was interested in a Windows 2-in-1 and only 9% was interested in an iPad Pro as a substitute for their laptop. Forty-nine percent came right out and said, “No way, I prefer a traditional laptop form factor”. Another 16% was not convinced touch is useful or needed – so Apple is not alone in thinking that a touch screen is not a must-have in a laptop form factor.

Once again, when digging a little deeper and looking at the data by operating system the panelists are currently running on their computer, things get very interesting.

Current Windows 10 users are much more open to the idea of using a 2-in-1 as their main laptop than current Mac users. This highlights two main points: on the one hand, Microsoft and all the OEMs have succeeded in positioning 2-in-1s as PCs. On the other, Mac users still see their devices as superior to an iPad Pro or a 2-in-1. While the difference is less striking when it comes to the role of touch, current Mac users do have more doubts on how much it is needed in a laptop.

My hypothesis that 2-in-1s are seen more as a PC while iPad Pro is still seen more as a tablet is backed up by the data we get if we cross these two questions. Of the millennials who would consider replacing their laptop with an iPad Pro, 22% would take a tablet on their business trip, while only 16% of those interested in using a 2-in-1 as their main PC would pick a tablet for the trip. This corroborates my hypothesis that the iPad Pro has yet to establish itself as a PC and Apple has more work to do.

While the Windows ecosystem could convince consumers 2-in-1s were PCs mainly through advertising, it had a great advantage over Apple: an operating system most consumers already know as a PC operating system. This means Apple needs to do more than advertise, especially at the enterprise level, which is exactly what it is doing with its collaboration with SAP in particular.

If we consider millennials’ attitude to communication in our survey, it is clear the way they communicate has changed quite significantly. Messaging, video calling, and voice calls through apps rather than the phone network have taken over. Empowering the workplace with apps that take advantage of the iPad and create new workflows will have the same impact. While this might not affect the sales trajectory of the iPad any time soon, I do believe it will make a difference in the enterprise for iOS. The big question is whether the iPhone or the iPad will be the main beneficiary.

In the Market for a Tablet? No-Brainer to buy an iPad

I am sure you know by now Apple has announced a new iPad model simply called iPad – aka the 5th generation. This is not quite an update to the Air 2, as some of the features, such as weight and thickness, are the same as the original Air.

The fact Apple did not hold an event for the announcement had more to do with not setting high expectations than with the significance of this product. The 9.7” iPad has been the most popular model for Apple. Since moving to the iPad Air line, Apple has been able to please customers who thought they wanted a smaller form factor. In reality, what they wanted was the higher portability that comes with a lighter device. The price drop of the older Mini generation helped buyers who wanted the most affordable iPad but would have not necessarily picked this product based on screen size.

Apple believes there is still a market opportunity for iPad both as people upgrade older models and as they discover iPads for the first time. For many consumers, however, making the jump to buying the first iPad or upgrading to a new model has not been easy. Depending on where you are in the process, there are either cheaper Android alternatives marketed as equals or the iPad you are using still fulfills your needs, making it hard to justify an upgrade.

This week, Apple made buying an iPad simpler and more affordable. The new line up is pretty clear:

    • iPad Mini is no longer the entry level iPad with consumers choosing this option based on form factor rather than price
    • No need to use the Air name as iPads are all lighter and slimmer than they were before the Air was introduced. I made the same argument for the MacBook Air when the new MacBook was announced and we’ll see if I am right
    • 9.7” is not only the most popular size but it is where Apple sees the future of iPad as it plays well in consumer, enterprise, and education. So the price aggressively comes down to $329
    • iPad Pro remains the flagship for people who want the best and/or people who are ready to make the switch from a PC or a Mac and make the iPad Pro their main computing device. The two sizes offer choice depending on your mobility requirements.

Tablets remain a category of device many consumers do not see as necessary. In fact, according to a GWI report, only 5% of online Americans consider a tablet as their most important device to access the internet, whether at home or elsewhere. This compares to 24% for smartphones and 40% for laptops. This lack of a clear role limits how much consumers are prepared to spend on them. Yet, when people use iPads satisfaction is high.

According to J.D. Power, iPads have the highest satisfaction in the category at 830 (out of 1000). Satisfaction is measured across performance, ease of use, features, styling and design and cost. iPads outperform the competition on every factor aside from cost. Apple just addressed this very point.

Apple is not going to Concede the Education Market to Chromebooks

Tablets are not just a consumer play and Apple is very well aware of that. Over the past year or so, Apple has been focusing on empowering their iPads in the enterprise through partnerships with IBM and SAP. Education is another major market for iPads but lately, Apple has been under pressure from a growing number of Chromebooks being used, especially by K-12 schools.

While $329 is an aggressive price in the consumer market, Apple pushed even more on education and will make the new iPad available through its education channel for $299. Targeted, aggressive pricing is something Apple is willing to do for certain segments and done in a way that does not negatively impact the brand.

Apple also collaborated with Logitech to make a rugged case available through the same channel, priced at around $99. Logitech will also offer an add-on keyboard for the rugged case as well as a “Rugged Combo” bundle.

Even at this price, Apple remains more expensive than Chromebooks. But there is more than just the iPad that Apple brings to the table compared to Chromebooks. Once the price gap closes, the other factors hold a different weight. Apple’s app ecosystem is much larger than what Chromebooks have to offer, and the fact that Android apps will run on Chromebooks will not make the situation much better. Many of the Android apps available in the store are still not optimized for tablet use which, of course, limits how user-friendly and rich the experience can be. The accessories ecosystem is also a plus for Apple as it lets the iPad better fit in with other tools teachers might be using in the classroom. The last point I think worth mentioning is security. Apple’s strong focus on privacy and security for its devices and the apps that run on them is an added benefit I am sure schools consider. Google and Apple both offer specific education tools to monitor access on the devices to limit vulnerabilities, but Chromebooks’ cloud/browser-based nature makes that more challenging. Of course, the fact Google Docs works well on iPad is a reassurance for teachers who are invested in those tools.

The education market is certainly becoming a battleground for Google, Apple, and Microsoft. It will be interesting to see who will focus on a more holistic experience that centers on empowering teachers and students to teach and learn vs. facilitating admin and management of kids and staff.

In Other News

There were more Apple announcements on Tuesday.

There was an updated storage option for the iPhone SE, which now starts at 32GB for the same price of $399, with a small $50 premium for the 128GB version. This is a sensible move by Apple to future-proof this line for further software upgrades.

To celebrate the ten year anniversary of support for Product RED and the fight against HIV, Apple released a RED iPhone. This is a first for Apple, which has done several RED products over the years — iPods, Beats headsets, iPhone and iPad cases — but never an iPhone.

Lastly, Apple announced Clips, a video editing app that will be available as a free download in April. Despite some confusion on social media, this is NOT a competitor to Snapchat or Instagram. Clips is about creating content to be shared on the social platform of choice. iPhone is used more and more for pictures and videos and giving users the opportunity to add features such as stickers, lenses, and filters makes a lot of sense. Apple is aware, however, that its audience is not made up of social savvy teenagers only. So Clips comes as a separate app rather than being integrated into the main camera app. First and foremost, this approach avoids annoying users who are not interested. It also offers Apple an opportunity to further develop Clips by adding other capabilities in the future – think AR.

It remains to be seen, however, if avid Snapchat and Instagram users will be interested in creating content in Clips before they share it through their social platforms. If Clips takes off, Apple will have created direct user engagement and shifted value back to the hardware, ultimately leaving the delivery portion of the engagement to the social network platforms.

Clips is a great example of the kind of first party apps Apple should be focusing on to add value to hardware. While the wider ecosystem is a great strength and the balance of keeping partners and developers engaged is tricky, there is certainly room for Apple to do more.

Why Today’s Education System is Failing Our Children

Schools are supposed to develop skills and capabilities while encouraging kids to “think differently” and maximize their abilities. Sadly, most schools are failing to do so today. The reasons rest within curricula that are not keeping up with the pace of change our world is undergoing, teachers set in their ways with little support to embrace change, and technology thrown in for good measure often without a clear purpose. In other words, schools continue to operate on fulfilling the needs of an Industrial Age student rather than preparing students for the Information Age.

Some look at the layout of an average classroom used today and argue that education has changed a lot. They go as far as to say the entrenched teacher-centered method has become a hybrid one that incorporates a student-centered approach. While this is true, the change towards this hybrid approach might have more to do with the need to handle a larger number of students in one class than with a real shift towards fostering collaboration and critical thinking among students.

The World Economic Forum produced a report in early January called “The Future of Jobs”. The report looks at what the employment landscape will look like in 2020. After talking to chief human resources and strategy officers from leading global employers, the authors listed the top 10 skills for 2020 and compared them to what was required in 2015:

By the time my third grader will be looking for a job, the change will be even more dramatic. She will have to be ready for what the World Economic Forum calls the “Fourth Industrial Revolution that is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres”.

Technology: Friend or Foe?

One cannot talk about how education is changing without assessing the role technology should have in the classroom. Some blame technology, and in particular the parents who act as facilitators, for empowering children (especially in K-12) with knowledge. Apparently, having kids learn through apps puts teachers at a disadvantage in the classroom because they are now faced with students who show different levels of knowledge and skills. It also puts pressure on teachers to rely more on technology, something many do not necessarily feel comfortable with. As you can sense from my tone, I don’t think this is a bad thing. I am sure that, to some extent, this is similar to the effect TV had on my generation, or libraries and newspapers had on previous generations. The big difference today is it’s not just about acquiring knowledge. It is about learning in a different way which, I would argue, does put pressure on teachers: pressure to relate to children in a different way. The same pressure many employers are facing as Millennials, and Gen Z after them, join the workforce.

The main reason why technology creates a challenge is that schools are focused on standardization, not customization. From teaching to tests, schools today are about students fitting a mold and falling within pre-determined parameters that leave little room for individuality, let alone creativity and critical thinking.

Ironically, technology could empower teachers to embrace customization by allowing for a more tailored teaching approach. Children would be able to work more at their own pace, allowing students who are underperforming to focus on improving while letting students who are more advanced to be challenged, resulting in a more engaged class overall.

Adding Devices is not the Same as Integrating Technology

Today, however, most schools use technology, not to transform teaching, but rather to fit around traditional teaching methods, and in most cases, substitute what used to be done on a piece of paper.

Good or bad, children are learning in a very different way today. Children who have access to technology and games such as Minecraft learn a new kind of creativity, one that has no physical limits. If you spend a few minutes talking to your Minecraft-obsessed child (like mine), you can see these pixelated worlds teach them about different materials, food, resource management, project planning, teamwork (if they use multiplayer), problem-solving, responsibility, and accountability for their animals. A whole host of YouTube programs, like Mineflix, are also teaching kids how they can learn and express themselves through storytelling, which plays a huge role in modern gaming. Children learn more about their world and their building options and try them out themselves. The virtual manipulation Minecraft allows is like Lego creativity on steroids. I am clearly not alone in believing there is something in Minecraft that benefits education. Microsoft created the Minecraft Education Edition and teachers who have used it speak of higher engagement, collaboration, experimentation, and a greater sense of accomplishment.

Embracing tools like Minecraft Education Edition can pivot the learning environment from teacher-centered to student-centered, where students not only teach other students but they can help teachers learn. These new methods can help the transition from a textbook-driven method to a research-driven one, from passive learning to active learning, from a fragmented curriculum to an integrated and interdisciplinary one, much like the skills the workplace will require.

There is so much transformational tech coming that I fear schools will miss out if they do not start thinking about integrating it now. Augmented and virtual reality are at the top of the list for how impactful they could make learning science, geography or even history lessons.

Tech, of course, is not just about learning per se but about data analytics as well. Being able to analyze data in real time will have a tremendous effect on teaching methods and help with change. Think about how powerful it is to have insights about students, especially the ones that are “at risk,” so you can actively adapt your planning and teaching activities. For students, the ability to measure their progress with their personal goals would also be motivational and engaging.

Preparing Our Children for an AI-Driven World

Technology is actively transforming the world today and will continue to do so whether schools are ready for it or not. As artificial intelligence grows in importance for tech companies around the world, our education system should focus on giving our children the skills they need to have a successful career when they grow up. This starts with acknowledging that the skill sets my generation was taught (and that served me well) are no longer critical for my daughter. Critical thinking and problem-solving are the first steps towards coding. Learning how to communicate effectively is at the center of successful collaboration and leadership.

Schools are important in this process because, unless education takes on the accountability to integrate technology and prepare our children for their future, the divide we see today between classes will only widen.

I am worried about the prospects my child will have in the future, and I am one of the lucky parents who can afford to provide technology to my child. I am a concerned parent in the heart of Silicon Valley with a third grader who is attending a private school. If I am concerned and I live in the bubble, what is it like for children across the country?

At nine years old, she has her own iPad and a Windows 2-in-1, and has been fortunate enough to have tried VR and new devices such as Amazon Echo that teach her to interact with different computing devices using her voice. I am well aware I am the exception and, when you look at how technology is distributed today, it is anything but an equal opportunity. Tablet penetration is still limited, for the most part, to households with incomes of $100,000 and up. Smartphones are, for many Hispanic and Black Americans, the only way to access the internet due to the lack of broadband access in their homes.

Schools must close the technology gap, and they need to do so in a meaningful way, not just using tablets and Chromebooks as an alternative to a book or as a time filler. Of course, schools are not going to be able to do that on their own. Funds are needed for acquiring the technology and maintaining it. More importantly, funds and support are needed to allow teachers to learn. A complex process I realize, but one we cannot postpone.

Has Google Set Up Google Home to Disappoint?

When I saw Google Home for the first time back at Google I/O, I was excited at the prospect of having a brainier Alexa in my home. Like others, I waited and almost forgot all about it until it was reintroduced last month when I actually could go and pre-order it.

I got my Google Home at the end of last week and placed it in the same room Alexa has been calling home for almost a year now. The experience has been interesting, mainly because of the high expectations I had.

Making comparisons with Echo is natural. There are things that are somewhat unfair to compare, given how long each device has been on the market and, therefore, the different opportunities they have had to attract apps and devices that connect to them. There are others, though, that have to do with how the devices were designed and built. I do not want to do a full comparison, as many reviews out there have done a good job of that, but I do want to highlight some things that, in my view, point to the different perspectives Amazon and Google are coming from when it comes to digital assistants.

Too early to trust that “it just works”

Like Echo, Google Home has lights that show you when it is listening. Sadly, though, it is difficult to see those lights if you are not close to the device, as they sit on top rather than on the side like the blue Echo lights that run in circles while you are talking to Alexa. This, and the lack of sound feedback, make you wonder if Google Home has heard you or not. You can correct that by turning on the accessibility feature in the settings, which allows for a chime to alert you that Google Home is engaged.

It is interesting to me that, while Amazon thought the feedback actually enhanced the experience of my exchange with Alexa, Google did not think it was necessary and, furthermore, treated it as an accessibility matter rather than a way to address the uneasiness of just trusting you will be heard. This is especially puzzling given the Echo has seven microphones that clearly help it pick up my voice from across the room far better than Google Home does.

The blue lights on the Echo have helped me train my voice over time so I do not scream at Alexa but speak clearly enough for her to hear even over music or the TV. This indirect training has helped, not just with efficiency, but it has also made our exchanges more natural.

OK Google just doesn’t help bonding

I’ve discussed before whether there is an advantage in humanizing a digital assistant. After a few days with Google Home, my answer is a clear yes. My daughters and I are not fans of the OK Google command but, more importantly, I think there is a disconnect between what comes across as a bubbly personality and a corporate name. Google Assistant – I am talking about the genie in the bottle as opposed to the bottle itself – comes across as a little more fun than Alexa, from the way it sings Happy Birthday to the games it can play with you. Yet, it seems like it wants to keep its distance, which does not help in building a relationship and, ultimately, could impact our trust. I realize I am talking about an object that reminds you of an air freshener, but this bond is the key to success. Alexa has become part of the family, from being our Pandora DJ in the morning to our trusted timekeeper for homework to my daughter’s reading companion. And the bond was instant. Alexa was a ‘she’ five minutes out of the box. While Google Assistant performs most of the same roles, it feels more like hired help than a family member.

Google Assistant is not as smart as I hoped

The big selling point of Google Home has been, right from the get go, how all the goodness of Google search will help Google Assistant be smarter. This, coupled with what Google knows about me through my Gmail, Google docs, search history, Google Maps, etc., would all help deliver a more personalized experience.

Maybe my expectations were too high or maybe I finally understand being great at search might not, by default, make you great at AI. I asked my three assistants this question: “Can I feed cauliflower to my bearded dragon?” Here is what I got:

Alexa: um, I can’t find the answer to the question I heard

Siri: Here is what I found… (displayed the right set of results on my iPhone)

Google Assistant: According to the bearded dragon, dragons can eat green beans…

Just in case you are wondering, it is safe to feed bearded dragons cauliflower but just occasionally!

Clearly, Google Assistant was able to understand my question (I actually asked multiple times to make sure it understood what I had said) but pulled up a search result that was not correct. It gave me information about other vegetables and then told me to go and find more information on the bearded dragon website. The first time I asked who was running for president I received an answer that explained who can run vs who was running. Bottom line, while I appreciate the attempt to answer the questions and I also understand when Google Assistant says, “I do not know how to do that yet but I am learning every day”, the experience is disappointing.

Google is, of course, very good at machine learning as it has shown on several occasions. I could experience that first hand using the translation feature Google Home offers. I asked Google Assistant how to say, “You are the love of my life” in Italian. I got the right answer delivered by what was clearly a different voice with a pretty good Italian accent. Sadly, though, Google Home could not translate from Italian back into English which means my role as a translator for my mom’s next visit will not be fully outsourced.

We all understand today’s assistants are not the real deal but rather a promise of what we will have down the line. Assistant providers should also understand that, for most of the things the assistants help us with today, there is an old-fashioned way to do it which, more likely than not, will be correct. So, when I ask a question I know I can answer by reaching for my phone or a computer, or when I want to turn the lights off even though I could get up and reach for the switch, the assistant has to earn its place by getting it right. This is why a non-experience at this stage is better than the wrong experience. In other words, I accept Google Assistant might not yet know how to interpret my question and answer it, but I am less tolerant of a wrong answer.

Google Assistant is clearly better at knowing things about me than Alexa and it was not scared to use that knowledge. This, once again, seems to underline a difference in practices between Amazon and Google. When I asked if there was a Starbucks close to me, Google Assistant used my address to deliver the right answer. Alexa gave me the address of a Starbucks in San Jose based on a zip code. Yet, Alexa knows where I live because Amazon knows where I live and my account is linked to my Echo. Why did I have to go into the Alexa app to add my home address?

Greater Expectations

Amazon is doing a great job adding features and keeping users up to speed with what Alexa can do, and I expect Google to start growing the number of devices and apps that can feed into Google Home. While the price difference between Google Home and Echo might help those consumers who have been waiting to dip their toes into smart speakers, I feel consumers who are really eager to experience a smart assistant might want to make the extra investment to have the more complete experience available today.

We are still at the very beginning of this market but Google is running the risk of disappointing more than delighting at the moment. Rightly or wrongly, we do expect more from Google, especially when we are already invested in the ecosystem. We assume Google Assistant can add appointments to our calendar, read an email, or remind us of an upcoming event and, when it does not, we feel let down. The big risk, as assistants become something we engage with more and more, is that consumers might come to question their ecosystem loyalty if they see no return on it.

Touchscreen or No Touchscreen, That is the Question!

A lot has already been written about Apple’s Touch Bar for the MacBook Pro and how Apple should have just gone all in and actually added a touchscreen. I hinted on the day of the event that the Touch Bar could actually end up being more impactful than a touch screen and I would like to explain why.

Windows Touch Screens Were a Response to Mobile

I think it is important to look at why we have touch screens in the Windows camp.

Touch screens on Windows were not the result of a platform need. When we started to see hybrid devices running Windows, we were still on Windows 8, which was not optimized for touch. Nor were touch screens the result of an innovation aimed at changing the way we worked and interacted with content.

We got touch screens because Windows as a platform was trying to catch up to mobile.

With very little opportunity for growth in smartphones, and iPad at the high-end and cheap Android tablets at the low-end impacting PC sales, Windows PC makers wanted to fight back by adding the one function the world seemed never to get enough of. By adding touch to PCs, vendors were hoping to shift the downward trend in PC sales while decelerating tablet growth.

Then there was Surface. Microsoft started Surface because what vendors were releasing at the time was failing to compete with tablets. Consumers were not interested in buying a new PC and enterprises were still not sure they wanted to invest in the premium that touch was bringing to the new machines. Surely productivity did not need touch!

Not just about the hardware

Even Surface did not hit a home run the first time around. While it was the best hardware Windows had to offer at the time, the first iteration of Surface running Windows 8 was a less than optimal experience when using touch. The obsession with competing with the iPad was also giving rise to confused products like Surface RT.

Fast forward to today and you have the Surface Pro 4 running Windows 10, offering a full computing experience in a versatile form factor with an OS that works well with both touch and keyboard.

Looking at hardware alone, however, is not enough to understand how far a device can go when it comes to bridging PCs and tablets. Apps have been key in tablets. So much so that the market has been clearly split in two: a high-end that is dominated by iPad, where there are over one million dedicated apps, and a low-end market where Android tablets reign supreme mainly as content consumption screens.

Windows-based 2-in-1s, Surface included, suffer from the lack of touch-first apps that would help move the needle in adoption and, most of all, with engagement and loyalty. It is for this reason that seeing Microsoft invest in first-party apps is so refreshing. Microsoft is delivering value and, hopefully, showing developers the potential with both apps and new devices such as the Surface Dial. In an interview with Business Insider, Panos Panay, VP of Microsoft Devices, said something I could not agree with more: “The entire ecosystem benefits when we create new categories and experiences that bring together the best of hardware and software.”

Meanwhile, across the fence, the Mac OS store has not captured developers in the same way the iOS Store has. The prospect of being able to reach hundreds of millions vs. tens of millions of users has kept a lot of developers focusing on iPhone and iPad.

Adding touch support for macOS Sierra might have left users not much better off than they were before. I assume developing for the Touch Bar is much easier than designing a brand new app for Sierra optimized for touch, which ultimately would result in a better experience for the user.

The “I need a keyboard” argument

Clearly, Apple did not do the Touch Bar just because it was easier to develop for. Apple continues to maintain that vertical touch is not the right approach. Many disagree because the extensive use of touch has us reaching out to touch our screens more and more often. Yet, when we touch our screens, we generally want to scroll or select. We really do not want to do complex things, which raises the question: why can’t we do it on the trackpad we already have on our keyboard? We can discuss this point till the cows come home and we will find pros and cons on both sides.

So let’s look at this point a little differently. There are two main reasons why someone buys a MacBook Pro today: OS and the keyboard. Rightly or wrongly, many people still think iOS is not a “full OS” – another point we can discuss till the cows come home. But the keyboard is key.

If the keyboard is so important for these users, it seems fitting that Apple focused on making that experience better. In a recent interview with CNET, Jony Ive said:

“Our starting point, from the design team’s point of view, was recognizing the value with both input methodologies. But also there are so many inputs from a traditional keyboard that are buried a couple of layers in…So our point of departure was to see if there was a way of designing a new input that really could be the best of both of those different worlds. To be able to have something that was contextually specific and adaptable, and also something that was mechanical and fixed, because there’s truly value in also having a predictable and complete set of fixed input mechanisms.”

Taking touch and contextualizing it to the keyboard to make gestures, steps, and functions more natural, immediate, and precise makes a lot of sense to me. As is often the case with Apple, you get what you asked for but not in the form you thought you wanted it.

What Does This Mean for the Future?

For Apple, it means it is serving two different audiences that think of computing in different ways. Apple will do so for as long as it will take for MacBook users to be convinced the iPad Pro and iOS 10 represent the next computing platform.

For Microsoft, it is about focusing on the larger and longer term shift that will see Mixed Reality play a big role in the way we interact with devices, the way we do business, and the way we learn. Microsoft is making sure it is shaping its own path rather than finding itself blindsided and left to scramble as it did with mobile.

CarPlay: The Best Incarnation of Apple’s Ecosystem

Apple is making a car. The code name is “Project Titan.” Apple brings back Bob Mansfield from retirement to lead the project. Apple lays off dozens of employees who were presumably working on the car project that was never confirmed. Apple might no longer be making a car. There! You are all caught up on the months of speculation around Apple and cars!

What I do know for sure is Apple is in my car today. A new car I have had now for about 10 days. A totally unnecessary purchase justified by the fact that my old car – a 2014 Suburban – was not technologically savvy enough. Now, I have the 2016 model and it does all sorts of things for me — warning me about lane departures, making my seat vibrate when a car or pedestrian is approaching me while reversing, and showing me the direction with a big red arrow on my screen. The most interesting part, however, is having CarPlay and Android Auto.

As I am currently using an iPhone 7 Plus, I tried out CarPlay and the results are quite interesting.

I have been using Google Maps pretty much since I got to the US four years ago. My old car had a navigation system but I hated it so I was using my phone with a Bluetooth connection. I had tried Apple Maps when it first came out but went back to Google and soon got used to certain features, such as the multi-lane turn as well as the exact timing of the command. I got comfortable with it and, aside from trying out HereWeGo and Waze, I have been pretty much happy with Google.

Having CarPlay made me rediscover Maps and features like where I parked my car, the suggested travel time to home or school or the office, suggestions based on routine or calendar information — all pleasant surprises that showed me what I had been missing out on. It also showed me how, by fully embracing the ecosystem, you receive greater benefits. Having the directions clearly displayed on the large car screen was better and, while there is still a little bit of uneasiness about not using Google Maps, I have now switched over. Maps on Apple Watch just completes the car experience, as the device gently taps you when you need to make the turn. It is probably the best example I have seen thus far of devices working together to deliver an enhanced experience vs. one device taking over the other.

Music has been in my car thanks to a subscription to Sirius XM but, at home, we also have an Apple Music subscription as well as Amazon Prime Music. With CarPlay, my music starts to play in the car as soon as the phone is connected and, despite my husband’s initial resistance, this past weekend, he was converted. He asked Siri to play Rancid and he was somewhat surprised when one of his favorite songs came on. My daughter is also happily making requests to Siri and everybody catching a ride is quite relieved not to be subjected to Kidz Bop Radio non-stop.

The best feature, however, is having Siri read and compose text messages for you. I know I can do that outside my car as well but I rarely do because, well frankly, I don’t have to: typing serves me just fine. When I interact with Siri, the exchange feels very transactional: I ask a question, I get an answer, and that is it. The car is the perfect storm when it comes to getting you hooked on voice commands. You are not supposed to be texting and driving, the space is confined, and there is little background noise, as the music is turned off when you speak (I have to admit a switch to turn off the kids would be nice too). Siri (she) gets commands and messages right 90% of the time, which gets me to use her more. Interestingly, it is also when I have a more natural, more conversational exchange with Siri:

Siri: There is a new message from XYZ. Would you like me to read it to you?
Me: Yes, please.
Siri: (reads message)
Siri: Would you like to respond?
Me: Yes
Siri: Go ahead
Me: Yada Yada Yada
Siri: You are replying “Yada Yada Yada,” ready to send?
Me: Yes

At the end, you have a pretty satisfied feeling of having achieved what you wanted and not once moving your eyes from the road ahead.

Our Voice Assistant survey did show a preference for consumers to use their voice assistant in the car. Fifty-one percent of the US consumers we interviewed said they do, so I am clearly not alone. I would argue that interacting through car speakers vs the phone – assuming you are not holding the phone to your mouth which would not be hands-free – gives you higher fidelity and therefore a better, more engaging experience.

While we wait for autonomous cars (maybe even one by Apple) to take over and leave us free to either work or play while we go from point A to point B, it is understandable that CarPlay stays limited to functions that complement your driving but do not interfere with your concentration. That said, I think there is a lot of room for Apple to deliver a smarter experience in the car if it accesses more information from the car and the user: suggesting a gas station when the gas indicator goes below a certain point, suggesting a place to park when we get to our destination, or a restaurant if we are driving somewhere we have not been before and it is close to lunchtime. The possibilities are many.

The problem with CarPlay is it relies on consumers upgrading their cars to one of the over 100 models available or integrating CarPlay kits — which range from just under $200 to over $700 depending on brand and quality. This is a steep price to pay when you are not quite sure what the return on your investment will be. Apple needs to find a way to lower that adoption barrier for CarPlay so as to speed up adoption. The more users experience CarPlay, the easier it will be to get them to take the next step when it comes to cars, whether an Apple-branded car or a fuller Apple experience in the car.

Google’s Pursuit of Happiness: #MadeByGoogle

After weeks of speculation and leaks, Google finally announced its first #MadeByGoogle smartphones, called Pixel and Pixel XL. Yet, the phones per se were the least interesting part of the event, in my opinion. What was really interesting was Google’s focus on AI as the next platform after mobile and how, in order to win that battle, Google feels the need to go deeper into hardware.

Pixel is No Nexus

While the premise of the Pixel phones might be the same as Nexus – always the latest software and a pure experience – everything else points to a very different role these products will play for Google.

First, of course, there are the phones themselves, made by HTC, along with the Google-branded VR viewer. Second, Google stated it had a lot to do with the design of the phones, unlike the Nexus phones — which were rebranded versions of a specific maker’s flagship product.

The limited appeal of Nexus in the past, though, had little to do with how the phones were designed or who made them and a lot to do with how they were sold. With Pixel, Google is not just relying on the Google Store for distribution but is partnering with carriers across different markets — Verizon in the US, EE in the UK, Rogers in Canada, Deutsche Telekom in Germany — as well as key retailers such as Best Buy and Carphone. It seems quite clear Google will put a marketing budget behind these devices; something it did not do for Nexus.

I would expect sales volumes to be significantly higher than those of the previous Nexus products. Yes, I know that does not say much, given Nexus represented less than 1% of overall Android sales. But let’s be clear — Pixel is not designed to appeal to every Android user out there. Hence, the premium price.

The little pitch about the new feature that allows users to easily switch OSes, including helping you port iMessages, was a nice giveaway of who Google is hoping to capture. Before we get to iPhone users, though, I think an easier target is high-end Android customers, most of whom are using a Samsung Galaxy S phone now.

With Pixel being the first Daydream-ready phone and, most likely, the only one shipping in time for the holidays, Google adds VR as the cherry on the cake. It will give the Daydream View free with preorders and has priced it $20 below the Samsung Gear VR. This is more bad news for Samsung, which has a clear head start in VR but will now face more competition and might need Facebook to be more of an enabler on the content side in order to compete with Google’s YouTube. Of course, YouTube VR access will not be exclusive to Daydream-ready devices, but we could certainly see dedicated content for them as the user base grows.

Google is best on Pixel

Pixel is also clearly aimed at turning Android users into Google users, with Google Assistant, Allo, and Photos brought to the forefront so the user is engaged not just with the OS but with Google, more and more. There are many Android phones out there but, for now, only two Google phones. With this focus, Google might not need to make Android proprietary in order to control it.

If you think about the Android smartphone market now, you have Samsung and Huawei as the main players at a worldwide level. Then you have a group of brands that are strong in certain markets, and another group that is very localized. With Pixel, Google is playing a similar game to Microsoft with Surface; it just seems less shy about it. Google still needs all the other Android makers to churn out devices, because not all Android users are interested in Google services, have access to those services, or are actually valuable to Google. Google was quite careful to frame the pitch as Google being best on Pixel rather than Android being best on Pixel, aside from the mention of always running the latest version. While Nexus was the purest Android experience, Pixel is the best Google experience. It will be interesting to see how the services differ on other devices going forward, if at all. Today, for instance, we heard that Pixel comes with Google Photos built in and offers free unlimited storage at full resolution for both pictures and videos.

Moving to an AI-First World

Before we got to the device announcements, Sundar Pichai set the scene for what the next battleground will be: AI. Google Assistant comes embedded in Pixel and, in November, could enter our living rooms through Google Home. Pichai explained how “Google for everyone” will become “Google for you” (and you and you and you and you): Google Assistant will be tailored to you and your individual needs. Some may wonder why Google is bringing Google Home to market at all when there is an army of phones out there that could run the Assistant. Well, first of all, the army is not ready — most of those phones run older software that does not support Google Assistant, although phones on Lollipop and up can run Allo. Second, Google Home might convert some Apple users who are not quite ready to give up their iPhone yet but might be intrigued by a home device. Third, as Echo taught us, a voice-only device might get us to embrace our assistants faster and more deeply. Considering the aggressive price of $129, Google clearly wants to sell a lot of these.

The demo was impressive, both in its conversational ability and in its range of knowledge. The latter should not be a surprise, given Google has search in its DNA. While this less transactional exchange might make Google Home quite appealing, some of the use cases will be limited to the user who owns the account it is paired with.

You have heard me talk in the past about the roles personal assistants might play. When Google Home was introduced at Google I/O, I thought Alexa was depicted more as a Mary Poppins while Siri was becoming more and more like Jarvis — especially now that she can whisper in my ear through AirPods. If Home can only be associated with one account, for now, it means it will only know about one person’s calendar appointments, shopping list, and so on — pretty much everything but music, as Google Home supports multiple music accounts. I am not sure if this setup decision caters to “the head of the household,” which would seem to fit with our research showing women are generally less interested in interacting with Alexa. Yet, if you ask most families about the family routine — calendars, homework, and the like — it is really not the dad who is in charge (and yes, I know I am stereotyping). So even Google Home seems to be more like Jarvis than Mary Poppins, which raises the question: why do I need it in the home rather than on my phone? Maybe this is why we have Pixel with an embedded Assistant and phones that run Allo.

There was more #MadeByGoogle at the event, like the 4K Chromecast and Google Wifi, but these completed the picture rather than made it. What was absent was the rumored new platform mashing together Android and Chrome, codenamed Andromeda. Given the focus of the event, though, such an announcement would have felt very much out of place. We will see if Andromeda surfaces closer to Google I/O in 2017. There is a lot to digest from today, and many questions will be answered when the devices start shipping later in the year.