Inside the Mind of a Hacker

Writing about security is kind of like writing about insurance. As a responsible adult, you know it’s something you should do every now and then, but deep down, you’re really worried that many readers won’t make it past the second sentence. (I hope you’re still here.)

Having recently had the privilege of moderating a panel entitled “Inside the Mind of a Hacker” at the CyberSecurity Forum event that occurred as part of CES, however, I’ve decided it’s time. The panel was loaded with four smart and opinionated security professionals who hotly debated a variety of topics related to security and hacking.

Speaking to the theme of the panel, it became immediately clear that the motivations for the “bad guy” hackers (there was, of course, a brief but strong show of support for the white hat “good” hackers) are exactly what you’d expect them to be: money, politics, pride, power and revenge.

Beyond some of the basics, however, I was surprised to hear the amount of dissent on the topics discussed, even by those with some impressive credentials (including work at the NSA, managing cyber intelligence for Fortune 500 companies and government agencies, etc.). One particularly interesting point, for example, highlighted that hackers are people too—meaning, they make mistakes. In fact, thankfully, they apparently make quite a lot of them. While in retrospect that seems rather obvious, given the aura of invincibility commonly attributed to hackers through popular media, it wasn’t something I expected to hear.

Another key point was the methodology used by most hackers. Most agreed that the top threat is from phishing attacks, where employees at a company or individuals at home are lured into opening an attachment or clicking on a link that triggers a series of, well, unfortunate events. Even with up-to-date anti-malware software and security-enhanced browsers, virtually everyone (and every company) is vulnerable to these increasingly sophisticated and tricky attacks. However, several panelists pointed out that too much attention is spent trying to remedy the bad situations created by phishing attacks, instead of educating people about how to avoid them in the first place.

Looking forward, the rapid growth of ransomware, in which companies or individuals are locked out of their systems and/or data until a ransom is paid to unlock them, was one of the panelists’ biggest concerns. Attacks of this sort are growing quickly and most believe the problem will get much worse in 2017. In many cases, organized crime is behind these incidents and, with the popularity of demanding payment in bitcoin or other payment methods that are nearly impossible to trace, the issue is very challenging to address.

Another concern the panel tackled was security issues for Internet of Things (IoT) devices. Many companies getting involved with IoT have little to no security experience or knowledge and that’s led to some gaping security holes that automated hacking tools are quick to find and exploit. Thankfully, the group agreed there is some progress happening here with newer IoT devices, but given the wide range of products already on the market, this problem will be with us for some time. One potential solution that was discussed was the idea of an IoT security standard (along the lines of a UL approval), which is a topic I wrote about several months back. (See “It’s Time for an IoT Security Standard”.)[pullquote]There are few if any things that can be completely blocked from hacking efforts, but huge progress could be made in cyber security if companies and people would just start actually using some of the tools already available.[/pullquote]

Another potential benefit could come from improved implementations of biometric authentication, such as fingerprint and iris scans, as well as leveraging what are commonly called “hardware roots of trust.” Essentially, this provides a kind of digital ID that can be used to verify the authenticity of a device, just as biometrics can help verify the authenticity of an individual. Both of these concepts enable more active use of multi-factor authentication, which can greatly strengthen security efforts when combined with encryption, stronger security software perimeters, and other common sense guidelines.
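As a rough illustration of that “digital ID” idea, here is a minimal, hypothetical challenge-response sketch in Python. A real hardware root of trust keeps an asymmetric key inside secure silicon and verifies signed certificates; a shared-secret HMAC is used here purely to keep the example short, and all of the names and values are assumptions rather than any vendor’s actual implementation.

```python
import hashlib
import hmac
import os

# Simplified sketch: the device proves it holds its key without ever revealing it.
DEVICE_KEY = os.urandom(32)          # in practice, generated and held inside secure hardware
server_copy_of_key = DEVICE_KEY      # in practice, the server would hold a matching public key

def device_respond(challenge: bytes) -> bytes:
    """Device side: answer a fresh challenge using the protected key."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected answer and compare in constant time."""
    expected = hmac.new(server_copy_of_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)           # a fresh random challenge defeats simple replay attacks
print(server_verify(challenge, device_respond(challenge)))   # True only for the genuine device
```

Pair a check like this with a biometric factor and you get a simple form of multi-factor authentication: something the user has (a device with a protected key) plus something the user is (a fingerprint or iris).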

As the panel was quick to point out, there are few if any things that can be completely blocked from hacking efforts. Nevertheless, huge progress could be made in cyber security if companies and people would just start actually using some of the tools already available. Instead of worrying about solving the toughest corner cases, good security needs to start with the basics and build from there.

Why Tech Leaders can’t Succumb to a Presidential Bully Pulpit

Merriam-Webster explains the term “bully pulpit” this way:

Bully pulpit comes from the 26th U.S. President, Theodore Roosevelt, who observed that the White House was a bully pulpit. For Roosevelt, bully was an adjective meaning “excellent” or “first-rate”—not the noun bully (“a blustering, browbeating person”) that’s so common today. Roosevelt understood the modern presidency’s power of persuasion and recognized that it gave the incumbent the opportunity to exhort, instruct, or inspire. He took full advantage of his bully pulpit, speaking out about the danger of monopolies, the nation’s growing role as a world power, and other issues important to him. Since the 1970s, bully pulpit has been used as a term for an office—especially a political office—that provides one with the opportunity to share one’s views.

Roosevelt’s use of “bully” as an adjective rather than a noun made the term acceptable for its time and, if the person using that pulpit does so for good, the term can even be an endearing one. However, I am not sure we can see President-Elect Trump in that light yet, given his history of “blustering and browbeating” people to get his way.

I took a call from a reporter last week asking about Apple’s decision to have its servers in a single data center location, in Arizona, instead of at each of the major data centers it has around the US and the world. The reporter asked if Apple did this to get a better position in Trump’s eyes by doing the manufacturing in the US. All told, it will only add 10-20 jobs, and I told the reporter this was more strategic and had nothing to do with wanting to gain favor with Trump.

But other companies, such as Ford and Carrier, have made decisions to move jobs from planned facilities outside of the US back to America. On the surface, it does appear Trump “bullied” them into doing it. It also seems very clear to me that Jack Ma, the founder and executive chairman of Alibaba, who met with Trump at Trump Tower and pledged to bring one million jobs to the US, had staying in Trump’s good graces in mind.

Last week, Amazon announced they would add 100,000 jobs in the US. When this was announced, and because of Trump’s bully pulpit, I was asked by reporters if this decision was because of pressure from Trump or something more related to strategic growth.

I would hope it was a strategic decision but I have a sneaking feeling Amazon and many others do not want to rile Trump. What he says and does from his “bully pulpit” could hurt them during his time in office. Let’s be clear: I am 100% behind creating more jobs in the US but I believe this should come as a result of great business conditions, innovation, and a true need at these companies, and because it is strategic to their business growth. I also believe they should not be doing it because they were bullied into it. I am of the school that believes bullying companies into creating jobs may be a temporary fix. Unless it’s done with the right motive, conditions, and strategy, it will not deliver the fundamental change needed for these jobs to be long lasting.

I believe strongly that the tech industry and its companies should not succumb to the bullying tactics of President-Elect Trump in any way when it comes to strategic planning, growth, innovation, and even jobs.

That does not mean they should not want to work with him and, when necessary, lobby to influence Mr. Trump’s policies so he and his administration do not stand in the way of growing our tech economy. But, if any of their moves are made just to placate Trump, then they are building foundations that will crumble under the weight of forced motivations. Unless a move is strategic to their growth, it will set them back, not move them forward.

In a recent piece I did for Fast Company, I outlined my involvement with a council of independent tech influencers that helped shape President Bush’s tech agenda. In the article, I suggested some of the types of councils I believe President Trump needs to help him understand tech and, more importantly, use them to help develop a tech agenda of his own that would benefit his economic goals and get these companies to help support an agenda that moves our industry forward.

I believe working with President Trump in a civil, proactive manner should be the goal of every tech company, but kowtowing to him because he bullied them into some action should not be. The tech industry needs the resolve to stand up against any bully pulpit and do only what is right for growing their markets. Anything less than that won’t have a lasting impact on them or our industry.

Podcast: CES 2017 and Detroit Auto Show Autonomous Cars

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss car technology announcements from CES 2017 and the 2017 NAIAS Detroit Auto Show, as well as the regulatory, technical and business challenges facing autonomous cars.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Why I’m Optimistic About the Future of Cars

I spent last week at CES and a couple of days this week at the North American International Auto Show in Detroit. In both cases, I spent a lot of time listening and talking to carmakers and others in the industry. What I’ve come away with from these two weeks is a lot of optimism about the future of cars for several different reasons.

Both the industry and outsiders are pushing change

The biggest reason I’m positive about the future is that both the legacy industry players and the newcomers and outsiders are pushing for change. There’s nothing more frustrating than seeing an industry where lots of good ideas are coming from outside and they’re all being squashed by the incumbents – we’ve seen this happen in the music industry, the PC industry, and we’re still seeing it in the TV industry. Though there has been some resistance in the past to the big shifts facing the automotive industry, almost all the major carmakers now accept the new realities and, in many cases, are actively embracing the three big shifts: electrification, autonomous driving, and new ownership models.

The carmakers are actually engaging in their own efforts around autonomous driving and car and ride sharing. In the vast majority of cases, they’re also embracing electrification as one of several powertrain technologies. None of this is to say these companies will end up owning all of this themselves – at the very least, the disrupters from outside the industry and newcomers like Tesla have pushed the incumbents to innovate faster and they may well end up owning some of the end result too. But I heard from company after company about their investments and experiments in a variety of car and ride sharing models, even in urban mobility projects which don’t involve cars at all, such as bike and bus programs.

There is realism about challenges, at least behind closed doors

At both of the shows I’ve attended in the last two weeks, there have been lots of high-profile proclamations about the glorious future we’re all headed toward, many of them with specific timelines attached. Looking at the headlines that result from these statements, it’s easy to despair at a lack of realism from many of the companies involved. Claims about fully autonomous vehicles rolling off production lines as soon as 2021 seem absurd on their face but, when you dig beneath the surface and talk to the actual engineers behind the technologies, you get a sense of nuance that’s often missing from those public proclamations.

What I found this week in particular was that the carmakers are incredibly realistic about the very real challenges involved in bringing autonomous vehicles to market. There is definitely a headline-grabbing push to establish leadership in electrification and autonomous driving but those actually working on the technologies will tell you about all the complexities and challenges that exist. The real plans of the major carmakers reflect far more realistic timelines, which are much further out than the headlines would lead you to believe, at least for full Level 5 autonomous driving without geographic limits. When it comes to electrification, there are also far more sober views about the effect of current low gas prices on demand for EVs, the need for more charging infrastructure, and the limits of current battery technology. That realism is a good thing, because it means that, even as these companies embrace change, they’re going to do it in a way that prioritizes safety and the customer experience.

The future looks exciting

At the end of the day, I’m most optimistic because the future of cars looks generally very positive. Tesla has already shown us the enormous potential of both high-performance electric cars and limited autonomous driving. I used Uber and Lyft extensively over the last two weeks and those services, and many others around the world, demonstrate the potential for far lower car ownership and more flexible mobility models. What I saw at NAIAS this week also reassured me we’re going to get great technology from the incumbent carmakers when it comes to all three of the major shifts, including increasingly high-performance electric and hybrid vehicles and assisted driving technology that helps pave the way for future autonomous driving technologies. We, and especially our children, are going to be able to drive (or be driven by) cars that are much safer, more comfortable, more connected, and better for the environment than the ones we drive today. The competition between the legacy industry and a whole variety of new players is pushing both sides to move faster in delivering that reality. That’s going to be good for all of us.

Move Over IoT, AI is the New Hot Acronym

I survived CES 2017 to tell this story. From the very first day of press conferences and CES Unveiled, it was obvious connected-anytime-anywhere was what we were going to see at the show. It soon became clear that, if we played the drinking game for every time AI was mentioned by a vendor, we would not be sober for very long. Press conference after press conference, pitch after pitch, Artificial Intelligence was mentioned as a key trend for 2017 and one vendors were working on, although in most cases there was nothing concrete to see linked to the product being announced.

Everything Is Connected, Even When It Should Not Be

A quick Google search pins the origins of the Internet of Things to Peter T. Lewis back in 1985, when he gave a speech at an FCC-supported conference where he defined IoT as “The integration of people, processes and technology with connectable devices and sensors to enable remote monitoring, status, manipulation and evaluation of trends of such devices.”

Over the past two to three years, IoT has started to materialize as more and more devices were connected to the internet. How this is developing might be different than how vendors intended it a few years ago. If we rewind a little, after smartphones, many thought wearables were going to be the most pervasive devices, making humans the most connected ‘things’ of the Internet of Things. As it turned out, wearables penetration is ramping up very slowly and it seems the focus of the vendors and the attention of the consumers have both shifted to connecting our homes.

What gets connected, however, is questionable. Not everything that is connected should be. Sometimes, even if a device is connected with good reason, the value of that connection is not immediately clear to the customer. In 2013, the HAPIfork took CES by storm. The connected fork let users know when they ate too much or too fast and was one of the most talked-about devices of the show. In 2017, one of the most talked-about devices at CES was the L’Oréal Smart Brush. Developed with Withings, the Kérastase Hair Coach uses sensors and a microphone to count hair strokes and listen for hair breakage. Aside from the brush, I saw connected showers promising to keep your water warm (and save you water and money as well), windows that can sense when the air is getting too stuffy and open on their own, smart locks, voice-activated garbage cans and so much more. There was so much connected “stuff” that I started to mentally file gadgets into three categories — tech for the sake of tech, tech for the sake of lazy, and tech for the sake of humanity.

Sadly, I saw more gadgets and solutions falling into the first two categories than into the last. There were many products searching for a problem to solve and many that offered to replicate something we already do today, just with less effort on our part. What was interesting, however, was that the common underlying selling point was the smartness, not the connectivity.

IoT Is No Longer Cool for Consumers

I came away from CES with the clear impression that, although we are still talking about the Internet of Things, vendors, and more importantly PR gurus, have moved on from the connectivity part to the brain part of the devices.

While the enterprise is still very much talking about IoT as it looks to empower, manage and, most of all, monetize all these devices, it was as if the term had grown tired when it came to consumers. It could also be that, for consumers, being connected is a given nowadays and it is the value of connectivity that needs to be highlighted in order to drive a premium. Unfortunately, not everything that is connected is necessarily smart and not everything that is smart is necessarily intelligent. As I talked to people, I noticed the line between these three concepts was very blurred and, in most cases, the blurring was quite intentional.

For me, a smart device is not only a device that is connected to the internet and/or other devices but one that can interact with other devices and enable some degree of autonomy. In order to be smart, a device does not necessarily need Artificial Intelligence, at least if you think of AI in terms of a device that mimics “cognitive” functions we associate with the human brain, such as “learning” and “problem solving.” That does not make the device less smart or less useful. To give you an example, let’s look at the iPhone 7 Plus camera experience. The iPhone 7 Plus takes great pictures thanks to the two lenses, the software, the sensors, and the processing power. AI or, more specifically, Machine Learning, only comes in to deliver the portrait effect by recognizing where the subject of the picture ends and where the background begins. Think about maps as another example. The Estimated Time of Arrival (ETA) we are given when we set a destination comes from a combination of data: average speeds, actual travel times, traffic predictions, speed limits, and historical averages. These are all combined to get a projection of your ETA between two points. AI is what makes it possible for my phone to analyze my commute pattern and, as I connect to the car at a given time of day, offer me the ETA to the most likely location — my daughter’s school in the morning, the office after that or the Karate Dojo in the afternoon.
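To make the distinction concrete, here is a minimal, hypothetical Python sketch of the two ideas just described: blending live and historical data into an ETA, and guessing the most likely destination from past trips. The weights, trip history, and function names are illustrative assumptions, not how Apple’s or Google’s systems actually work.

```python
from datetime import datetime

def estimate_eta_minutes(distance_km, live_speed_kmh, historical_speed_kmh, live_weight=0.7):
    """Blend the live traffic speed with the historical average for this route and time."""
    blended_speed = live_weight * live_speed_kmh + (1 - live_weight) * historical_speed_kmh
    return 60 * distance_km / blended_speed

def most_likely_destination(hour, history):
    """Pick the destination seen most often at this hour in past trips (a crude 'learned' pattern)."""
    candidates = [dest for trip_hour, dest in history if trip_hour == hour]
    return max(set(candidates), key=candidates.count) if candidates else None

# Toy commute history: (hour of day, destination)
trip_history = [(8, "school"), (8, "school"), (8, "school"), (9, "office"), (16, "karate dojo")]

now = datetime(2017, 1, 16, 8, 0)
destination = most_likely_destination(now.hour, trip_history)
eta = estimate_eta_minutes(distance_km=12.0, live_speed_kmh=35.0, historical_speed_kmh=45.0)
print(f"Suggested destination: {destination}, ETA ~{eta:.0f} minutes")
```

The ETA half is plain data blending; only the destination guess involves anything you could call learning, which is exactly the blurred line between “connected,” “smart,” and “intelligent” described above.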

The Risk of AI-Washing

Internet of Things, Artificial Intelligence, and Machine Learning are all trends that will develop over time, not overnight. While the temptation to keep things fresh might get vendors to chase the next buzzword, I think it is very risky to do so. Talking prematurely about features and capabilities might just bore consumers sooner rather than excite them. Talk about what your products can deliver. Better yet, showing what they deliver is more effective than labelling features with sexy buzzwords. Don’t tell your users your device uses AI; show them what your device can actually do and, if it looks like a bit of magic, let them believe it is. Sometimes, talking too much about what is under the hood might raise more questions than you have answers for, especially when it comes to security and privacy, something we should all concern ourselves with as everything around us gets connected and smart.

Takeaways from CES 2017

By now you’ve undoubtedly read or viewed several different CES stories across a wide range of publications and media sites. So, there’s no need to rehash the details about all the cool, crazy, or just plain interesting new products that were introduced at or around this year’s show.

But it usually takes a few days to think through the potential impact of what these announcements mean from a big picture perspective. Having spent time doing that, here are some thoughts.

The impact of technology on nearly all aspects of our lives continues to grow. Yes, I realize that seems somewhat obvious, but to actually see, or at least read about, the enormous range of products and services on display at this year’s show makes what is typically just a conceptual observation very real. From food to sleep to shelter to work to entertainment (of all kinds!) to health to transportation and beyond, it’s nearly impossible to imagine an activity that humans engage in that wasn’t somehow addressed at this year’s show. Having attended approximately half of the 50 CES shows that have now occurred, I find the expanding breadth of the show never ceases to amaze me. In a related way, the range of companies that now participate in some way, shape, or form is surprisingly diverse (and will only increase over time).

Software is essential, but hardware still matters. At the end of the day, it’s the experience with a product that ultimately determines its success or failure. However, when you’re surrounded by the products and services that will drive the tech industry’s agenda for the next 12 months, it’s immediately clear that hardware plays an enormously critical role. From subtle distinctions like the look and feel of materials, to the introduction of entirely new types of tech products, the importance of hardware devices and key hardware components continues to grow, not shrink (as some have suggested).

What’s old can be new again. Though TVs and PCs may sound like products from a different era to some, this year’s show once again proved that the right technological developments combined with human ingenuity can produce some very compelling new products. Even long-forgotten technologies like front projection can be transformed in ways that make them very intriguing once again. Plus, it’s becoming increasingly clear that, just like the fashion and music industries, the tech industry is developing a love affair with retro trends. From vinyl to Game Boys and beyond, it seems many types of older tech are going to be revisited and renewed.

We are on the cusp of some of the biggest changes in technology that we’ve seen in some time. The integration of “invisible” technologies that we can’t directly see but still interact with is going to drive some of the most profound developments, power shifts, and overall restructuring that’s ever occurred in the tech industry. Oh, and it’ll make for some incredibly useful and compelling new product experiences too.[pullquote]The integration of “invisible” technologies that we can’t directly see but still interact with is going to drive some of the most profound developments, power shifts, and overall restructuring that’s ever occurred in the tech industry.[/pullquote]

Voice control will certainly be part of this, but there will be much more. In fact, the range of new products and services, as well as enhancements and recreations of existing products and services, that AI, deep learning, and other advanced types of software technologies can enable in combination with sensors, connectivity, and powerful distributed computing is going to be transformational. Sure, there’s been talk of adding intelligence to everything for quite some time, but many of the announcements from this year’s CES demonstrate that this promise is now becoming real.

Finally, trade shows still matter, even in tech. Yes, virtual reality may one day provide us with the freedom to avoid the crowds, hassles, and frustrations of trekking to an alternative location, and seemingly everyone who goes likes to complain about attending CES, but there’s nothing quite like being there. From serendipitous run-ins with industry contacts, to seeing how others react to products and technologies you find interesting, there are lots of reasons why it’s going to be difficult to completely virtualize a trade show for some time to come.

The Unintended Consequences of a Single Design Decision

Being involved in the design and development of consumer products, I’ve seen how a single design decision can have huge unintended consequences and change an entire industry — for the better or the worse.

As a positive example, when Apple decided to design notebooks using aluminum housings and abandon the industry’s use of plastic with ugly vents and screws, it created a huge industry of automated machining of solid aluminum blocks. That industry has now made it possible for other notebook makers to use the same processes to create their own products.

Another example is when Apple decided that thinness was a major goal for its mobile products. The unintended consequences have been huge, likely going well beyond the original intention, and have affected performance, features, user satisfaction, and the entire industry.

Some of those consequences are:

Shorter battery life – Making phones and notebooks as thin as possible, and then making them even thinner in each subsequent generation, resulted in less volume for batteries. Because the dimension that reduces a battery’s capacity most is its thickness, the battery life of iPhones and MacBooks has suffered. Battery life on iPhones and the latest line of MacBook Pros is well below expectations and is one of the major user complaints, so much so that the battery indicator no longer displays the time left. And, since a battery’s useful life is based on the number of charging cycles, smaller batteries go through more charging cycles for the same use, resulting in a shorter overall life (see the back-of-the-envelope sketch after this list).

Fragility – The thinness of iPhones resulted in the iPhone 6 and 6 Plus actually bending in normal use and in the need for protective cases. Samsung has shown, with its Galaxy S7 Active, that a phone can be made with a rugged, waterproof enclosure that’s only a few millimeters thicker. Speaking of Samsung, there’s even speculation that its problem with the Note 7 catching fire was a result of trying to beat Apple in the thinness competition.

Reduced number of ports – With thinness comes the need to remove many of the legacy ports designed for thicker products. While leaving them out makes it possible to reduce thickness, it requires carrying more dongles to connect to our other devices.

Loss of features – iPhones still don’t have wireless charging (and NFC remains limited to Apple Pay), likely a result of insufficient space. MagSafe, one of the most innovative features ever created for notebook computers, has been eliminated to make the new MacBook notebooks thinner. With its removal comes the loss of the battery charging indicator.

Typing errors – Thinness has led to notebook keyboards with reduced performance compared to the iconic keyboards used in products like the ThinkPad. Key travel has gone from 3 mm to under 1 mm, causing more errors.
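To put a rough number on the battery-cycle point above, here is a hypothetical back-of-the-envelope sketch in Python. The capacities, daily energy draw, and cycle rating are illustrative assumptions, not actual Apple specifications; the point is only that, for the same daily use, a smaller battery reaches its rated cycle count sooner.

```python
# Hypothetical figures for illustration only.
DAILY_DRAW_WH = 15.0    # assumed energy a notebook pulls from its battery per day
RATED_CYCLES = 1000     # assumed full charge cycles before capacity fades noticeably

def years_to_cycle_limit(capacity_wh):
    cycles_per_day = DAILY_DRAW_WH / capacity_wh   # fraction of a full cycle consumed each day
    return RATED_CYCLES / cycles_per_day / 365.0

for capacity in (55.0, 75.0):   # a thinner battery pack vs. a roomier one
    print(f"{capacity:.0f} Wh battery: ~{years_to_cycle_limit(capacity):.1f} years to the cycle limit")
```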

What’s ironic is these consequences might have resulted simply from an Apple executive saying, “I want our products to be as thin as they can be,” walking away, and then everyone taking the person literally. How likely is it that, when that request was made, anyone was thinking of any adverse outcomes? Well, perhaps a few engineers who were told not to be negative and to be team players.

The lesson is that an arbitrary goal in a product’s requirements can have far-reaching effects on the company’s products, as well as on an entire industry, and few may be aware of that when it all begins.

Dancing the Tango: Augmented Reality in Your Hands

Head-mounted augmented reality devices such as Microsoft’s HoloLens and Meta’s Meta 2 show what high-dollar, cutting-edge AR hardware can do, but the fact is few consumers will be buying these products anytime soon. I wrote about Lenovo’s Phab 2 Pro, the first production Android phone with Google’s Tango handheld AR technology, when the company announced it in June 2016. In November, Lenovo started shipping the product, and since then I’ve been testing it. Tango as a technology has a long way to go but, using this phone, it’s hard not to recognize the possibilities it represents. Phone-based VR such as Google’s Daydream is interesting. Phone-based AR, done right, could change how we interact with devices, information, our surroundings, and even each other.

Big Battery: Big Phone

I won’t dive into all the speeds and feeds of the Phab 2 Pro except to note that it has a massive 6.4-inch screen, a 4,050 mAh battery, and a custom Qualcomm Snapdragon 652 chip. Tango likely made the processor and big battery necessities; I suspect the large screen was a pocket-busting side effect of the battery requirements. The result is a phone so large it makes Apple’s sizeable iPhone 7 Plus look positively tiny. As a phone, it is simply too big. But as a Tango test machine, the large screen works out well, as it gives the apps a larger canvas with which to work. Which points to what will be an ongoing issue with handheld AR: you need a big screen to drive a reasonable experience, but too big and nobody will want to carry it. Foldable screens, anyone?
Crash-Prone Apps

The Phab 2 Pro has a Tango icon front and center that launches the section of the Google Play store devoted to Tango apps. At present, there aren’t very many apps to choose from, and most are relatively rudimentary. And some are just plain goofy. Obviously, it’s hard to incentivize app developers to create Tango-based apps when there’s just one hardware product shipping. In the meantime, there are a few apps worth discussing.

Google’s own Experience Tango app creates a fantasy world filled with unique creatures and plants that spring into existence as you walk around the actual space you are in. It’s a neat demonstration of the technology, but it’s the type of app you load exactly one time.

Dinosaurs Among Us is more interesting in that it lets you drop moving digital representations of extinct creatures into your physical reality. So, for example, I placed a Caudipteryx into my living room, standing on the rug. Then I took a video of my son walking around the critter. Cool.

The Lowe’s Vision app lets you do practical chores such as measuring an object or a space. It proved fairly accurate in my tests. The app also lets you drop Lowe’s products such as chairs and tables into your living room to see how they look and whether they fit (Wayfair has a similar app). You can even take a picture of the different products in your house and then solicit feedback from friends and family.

Finally, I spent a fair amount of time playing with the BikeConfig AR app, creating custom-built bicycles I can’t afford to buy in the real world. Unlike a static image, it is possible to walk around and inspect these bikes from every angle.

Unfortunately, extended periods of AR usage clearly impact the phone, which gets warm to the touch under heavy load. And, pretty much without fail, each time I’d be enjoying an experience on the phone, the Tango app I was using would hang or throw up an error message noting that Tango Core had crashed. Six months ago, Tango was a Google project; today it’s part of a shipping product, but in real-world use, it’s still far from ready for prime time.

Future Potential is Clear

While my day-to-day experience using Tango was mixed at best, and today’s apps are primarily gimmicks or one-trick ponies, the technology’s potential is clear. Handheld AR will make lots of interesting things possible, from indoor navigation to object recognition, knowledge transfer to collaboration, and shopping to entertainment. The downside, of course, is that if you think everyone is already doing too much walking around staring at their smartphones, this will only make things worse.

This week at CES, ASUS announced its ZenFone AR Tango phone (it also supports Daydream VR). I expect to see more vendors announce similarly enabled phones at Mobile World Congress later this year. And, of course, there’s plenty of speculation that Apple may enter the handheld AR space with its next iPhone launch. At some point, handheld AR will move from an interesting technology to one people will expect their phone to have; the biggest question now is exactly when it makes that leap.

This is What You’re Missing about Vocal Computing

On Christmas morning, as my mom and I hurriedly rushed around my kitchen making final preparations, a third voice would occasionally interject into our conversations. Sitting at my counter was Alexa, helping me through the process by answering questions, setting timers and even flipping on holiday music at our request.

I’ve been living with Alexa for roughly two years now and have grown accustomed to our constant banter. But, for my mom, it’s still a very new and novel experience. When my mom speaks to Alexa, she might recognize she’s speaking to a computer, but she probably doesn’t consider that the computer is actually thousands of miles away and she probably doesn’t realize the way we talked to that computer on Christmas morning is the new face of computing.

The graphical user interface (GUI) wasn’t a new idea when Xerox brought it to market in 1981 or when Apple’s Macintosh popularized it for the masses in 1984. A GUI didn’t represent a new technical way of computing but it was a crucial evolution in how we interact with computers. Think of the impact the GUI had on how we used computers and what we used computers for. Think of how it changed our conception of computing.

The smartphone was created in the 1990s but it wasn’t until 2007, with the advent of Apple’s iPhone, that smartphones reached an important inflection point in consumer adoption. Today, 75 percent of U.S. households own a smartphone, according to research from the Consumer Technology Association (CTA).

The touchscreen interface represented the next paradigm shift in computing, ushering in a new way of thinking about computing and bringing into existence new applications.

Smartphone computing shares an important heritage and legacy with the GUI introduced in the early 1980s. If you’re old enough to remember computing before GUIs, can you imagine computing on a smartphone using command prompts? GUIs in the era of desktops improved computing. It was the transformation to a graphic interface that ultimately launched the smartphone era of apps.

Vocal computing will do the same thing for the future of computing. Vocal computing isn’t perfect. Alexa isn’t always certain what I’m asking. Google Home doesn’t always provide an answer. Siri can’t always help my sons when they ask complex questions. Like two people on a first date, we are still getting to know each other.

Software layers and form factors change our computing experience. We’ve seen this throughout the history of computing – from the earliest mainframes to the computers we call phones and carry in our pockets. In all the same ways, vocal computing is just an extension of what we already know – it’s a more natural and intuitive interface.

Let’s not overlook just how transformative this new interface can be. Imagine someday computing on our bikes, in our cars, while we are walking or lying in bed. With voice, every environment can be touched by computing.

With TV, Tech isn’t the Problem

Along with much of the Tech.pinions team, I’m at CES this week and, as usual, TVs and TV-connected devices are a big theme. Streaming video providers are also present and making announcements of various kinds. Yet I’m struck again this week, as I have been before, by the fact technology isn’t really the biggest challenge in disrupting the traditional TV and video industry. Yes, there are advances being made in technology which are improving the user experience of watching video, but it’s content rights that are still the biggest barrier to really giving consumers what they want.

Working with – and around – the current system

A lot of the TV-related technology on display at CES either works with or around the current system. An increasing number of connected TV devices are incorporating some kind of over-the-air element. I saw boxes and other hardware from Mohu, Sling, and others designed to capture OTA broadcast signals and incorporate them into a next-generation user interface. This is hardly dramatic new technology – broadcast has been around for decades – but it’s often still the easiest way for cord cutters to access sports and local content.

It’s ironic we’re falling back on older technology to supplant newer coax, fiber, and satellite-based delivery, but this is the state of the TV industry today. Some of the best options simply have to work with what they’ve got. That’s admirable, but it often means disjointed experiences that combine OTA signals with internet-delivered streams, multiple user interfaces, and local or cloud-based storage. The new devices on offer at CES attempt to bring some harmony to all this, including the Mohu and Sling hardware devices I mentioned. But, in many cases, these solutions merely cobble back together bundles that end up looking very similar to what they’re replacing. And of course, OTA solutions don’t work for some people at all (I have a big mountain sitting between my house and the local broadcasters, meaning I get no signal at all).

Rights remain the biggest barrier

I’ve been using AT&T’s DirecTV Now since it launched late last year and I’ve largely been enjoying it, though I’ve seen a few technical hiccups here and there. But there are several non-technical things that detract from the experience – TV Everywhere authentication as offered by traditional pay TV services is a bit lacking and commercial breaks often display a “commercial break in progress” placeholder rather than actual commercials. The latter doesn’t bother me overly much, but both of these are entirely down to rights issues. Contracts signed years ago haven’t yet been renewed, so AT&T doesn’t have the rights in some cases to do its own ad insertion or to authenticate users on this service for TV Everywhere apps.

Hulu made some news at CES because it has apparently signed CBS as part of its pay TV replacement service. The fact that a single broadcaster signing on is news is more evidence of how fragmented this whole space is and how important rights negotiations are. The reality is that, even if people balk at the high prices of traditional pay TV services, they still want a lot of the content and that means paying directly or indirectly to access it. CBS has its own digital streaming service – CBS All Access – and has been a holdout from several of the other streaming services including DirecTV Now. Hulu getting CBS on board is therefore something of a coup but we’ve yet to see what other content it has secured and what its reported $40 per month price will include.

The reality is that some of this is merely a matter of contract renegotiations and will get worked out in the coming years, while other elements are down to content owners deliberately resisting or blocking some of the changes to the traditional business models. The major traditional pay TV providers are part of this picture too, though of course Sling and DirecTV Now come from two of the biggest. Cable operators have been the slowest to embrace this change, largely because they dominate the historical market.

User interfaces and video quality can still help

Having said all that, we’re still seeing innovation around user interfaces and video quality, and they are making a difference even as the rights issues get worked out. Some of the new streaming pay TV services have much better UIs than the services they’re replacing. Interactive programming guides still often make an appearance, but search, recommendations, and on-demand options make these interfaces more compelling. In addition, we’re seeing innovation around content formats like 4K and HDR, from both TV manufacturers and content providers. Here, too, the newer over-the-top services are taking the lead, with Netflix and Amazon offering some of the first mass market 4K content. But some of the pay TV providers are dabbling with 4K too, and even Samsung is now going to be selling 4K TV content through its smart TVs.

A tipping point is coming

At some point in the next year or two, I predict we’ll see a tipping point when it will become apparent to everyone, including the current holdouts, that digital delivery is the future and that it’s coming far faster than many of them thought. We’re already seeing the mainstreaming of streamed pay TV services, with the DirecTV Now launch just the first of a new raft of services, to be joined by Hulu, Amazon, and YouTube in the near future. But we’re also seeing accelerated cord cutting (with the pay TV industry losing well over a million subscribers per year at this point) and many individual cable networks losing subscribers at a much faster rate due to skinnier bundles and rising rights costs. All of this, taken together, will cause a crisis in the TV industry which will finally drive it to embrace new business models and broader distribution. And then the rights side of the equation will finally catch up with the advances in TV technology.

What Apple’s Acquisitions in 2016 Tell Us about 2017 and Beyond

There is a lot of speculation about the “iPhone 8” and what Apple should be focusing on in 2017 in order to stay ahead of the game or, for some, barely keep up with the competition. Despite some safe bets on new iPhone features that can be extrapolated from supply-chain clues, guessing correctly what Apple will do is almost as unlikely as winning the lottery. I thought, however, that looking at the 2016 acquisitions would give us more than a clue as to where Apple will focus in the future, so I also share my wish list of what I would like to see come out of Cupertino.

What Apple Acquired in 2016 (that we know of)

Emotient is a startup that uses artificial-intelligence technology to decipher people’s emotions by analyzing their facial expressions. The technology can be used for a number of things including detecting pain, reading reactions to content or situations we are exposed to – think advertising and retail. Emotient had been granted a patent for a method of collecting and labeling as many as 100,000 facial images a day that can be used to teach computers to better recognize facial expressions.

LearnSprout is a San Francisco-based startup focusing on tools that help teachers monitor students’ attendance, grades, and other school activities through easier access to school information systems. One of the purposes of collecting such information and making it available to teachers was to help identify at-risk students.

Flyby Media is a company that worked with Google on Project Tango. Flyby Media developed technology that allows mobile phones to see and scan, through the camera, the world around them. The company’s website also said they were developing the next generation of consumer mobile-social applications that connect the physical and the digital worlds.

LegbaCore is a firmware security company that specializes in “digital voodoo,” or security at the deepest and darkest levels of computer systems. Apple was first exposed to the company while battling Thunderstrike 2, the first firmware worm demonstrated to successfully attack Macs.

Carpool Karaoke is a popular show of which Apple licensed 16 episodes, to be produced (but not hosted) by James Corden as well as Ben Winston, the “Late Late Show” executive producer. Tim Cook and Corden kicked off Apple’s September event with a special edition of Carpool Karaoke.

Turi is a machine learning and artificial intelligence startup focused on tools that help enterprises make better sense of data. Turi also enables developers to build apps with machine learning and artificial intelligence capabilities that automatically scale and tune.

Gliimpse is a Silicon Valley-based company that built a personal health data platform that enables any American to collect, personalize, and share a picture of their health data. The focus was particularly around cancer and diabetes patients.

Tuplejump is an India-based machine learning company specializing in software that processes and analyzes big sets of data quickly.

Indoor.io is a Finnish company focusing on indoor location and mapping.

Acquisitions Show Clear Areas of Focus but How It Will Materialize is Still Unclear

If you look at the list above, aside from the clear outlier of Carpool Karaoke, the focus for Apple seems centered around artificial intelligence, augmented reality, enterprise and education.

Artificial intelligence is probably the best example of how far apart expectations and what Apple actually delivers might be. For many, artificial intelligence simply boils down to how smart Siri is. However, intelligence in devices is expressed in many different ways. Learning which color emoji is your preference, learning your most likely route at a given time of the day, and understanding a reference to a time and a place in an email and setting up an appointment for you are all examples of how “intelligence” can be used to make our experiences better.

Machine learning and fast data processing are key to feeding the brain of any artificial intelligence. Analyzing millions of data points to discover patterns that aid predictability is very important in lowering response times and increasing accuracy in our exchanges with an assistant like Siri. Being able to detect users’ emotions might also play a role in that interaction: for the assistant, knowing whether we are getting frustrated or anxious might help the exchange in the same way it would between two humans.

Augmented reality is an area in which Tim Cook has expressed interest and excitement. Aside from gaming which, of course, is a big part of what iPhones are used for, there are commercial experiences that could benefit from augmented reality, mixed reality, merged reality or whatever else you want to call this blend of real and digital worlds.

Enterprise is becoming more and more important for Apple and security plays a big role in selling devices to enterprises. As iPhones and iPads continue to penetrate organizations, they become more of a target for hackers, and Apple needs to stay ahead of the game. While consumers might not always recognize how important security is, Apple has been very passionate about it for quite some time. As we use our devices to store not just pictures and contact info but also payment information, health information, and smart home connections, we want our devices, as well as our data, not to be accessible to people with bad intentions.

In education, the battle to displace Chromebooks in K-12 will intensify in 2017, with Microsoft eyeing that segment as a growth opportunity for Windows. It is important for Apple that iPads not be forced to compete on price alone but instead add value to the offering beyond the devices themselves. Looking at applications and tools that educate as well as manage students is certainly a way to do that.

My Wish List for 2017

Considering the areas I have discussed above, there are a few things I would like to see Apple focusing on in 2017.

A More Conversational Siri – I have mentioned before how my relationship with Siri has been improving over time. This is good and bad at the same time. Good because I appreciate it. Bad because I want more. As my dependence on Siri grows, first in my car and then everywhere through my AirPods, I want Siri to rely less on my iPhone screen and become more conversational. Apple understands that less time looking at my screen does not mean I will think any less of my iPhone but I realize that, for conversational AI, the progress will be slow.

More Tools for Education – Swift Playgrounds was a great example of how Apple could do more to future-proof our kids with the kind of skills they will need when they grow up. AI is here to stay and, instead of worrying about the threat of job losses, we should be investing in preparing the next generation with the set of skills required to get a job. While this is a much bigger issue than any single company could solve, I think Apple is in a good place to get kids engaged at an early age, not just with coding and problem-solving skills, but also with fostering creativity, imagination, and innovation.

Better Collaboration Tools – Collaboration is broader than just working with someone else. While I would like to see Apple focus on better collaboration tools for work, it is at home that I more urgently need help. If you have kids, you know running a home is as complex as running a company. School, after-school activities, and work all blend together to create a scheduling nightmare only resolved with great collaboration skills.

More than any other company, Apple owns households and I would like to see more apps and tools to help households come together; not just for scheduling but also for monitoring and sharing. Without wanting Apple to give me whatever the digital equivalent of my daughter’s journal key is, I want to make sure my daughter is safe when she is online. Of course, teaching her how to do so is the first thing but there are more steps Apple could take to provide increased safety without hindering the experience. I am hoping machine learning will help with creating a more proactive approach to online safety as whitelisting websites, which is currently what most solutions boil down to, does not make for a rich experience. Sharing not just content but access to our smart homes across devices and family members could also be improved. Helping make our home life easier will pay dividends, especially at a time when the fight to own our home is intensifying among digital assistants. While having an assistant that connects with many smart home devices is valuable, having one that does not let me forget to pick up my kid from karate is priceless.

The Devil Is in The Detail

As you can see, my list is not about iPhone features and sexy new technologies. It is about practical experiences that improve my everyday life, something Apple has done for a long time, yet something that is hard to see when you first buy a product and hard to market at the point of sale. The challenge for Apple will be to continue to stay focused on delivering better experiences, rather than getting distracted by proving it can innovate by delivering sexy gadgets.

Top 10 Tech Predictions for 2017

Predicting the future is more art than science, yet it’s always an interesting exercise to engage in as a new year comes upon us. So with the close of what was a difficult, though interesting year in the technology business, here’s a look at my predictions for the top 10 tech developments of 2017.

Prediction 1: Device Categories Start to Disappear

One of the key metrics for the relative health of the tech industry has always been the measurement of unit shipments and/or revenues for various categories of hardware-based tech devices. From PCs, tablets and smartphones, through smartwatches, smart TVs and head-mounted displays, there’s been a decades-long obsession with counting the numbers and drawing conclusions from how the results end up. The problem is, the lines between these categories have been getting murkier and more difficult to distinguish for years, making what once seemed like well-defined groupings become increasingly arbitrary.

In 2017, I expect the lines between product categories to become even blurrier. If, for example, vendors build hand-held devices running desktop operating systems that can also snap into or serve as the primary interface for a connected car and/or a smart home system, what would you call that and how would you count it? With increasing options for high-speed wireless connectivity to accessories and other computing devices, combined with OS-independent tech services, bots, and other new types of software interaction models, everything is changing.

Even what first appear as fairly traditional devices are going to start being used and thought of in very different ways. The net result is that the possibility for completely blowing up traditional categorizations will become real in the new year. Because of that, it’s going to be time to start having conversations on redefining how the industry thinks about measuring, sizing, and assessing its health moving forward.

Prediction 2: VR/AR Hardware Surpasses Wearables

Though it’s still early days for head-mounted virtual reality (VR) and augmented reality (AR) products, the interest and excitement about these types of devices is palpable. Yes, the technologies need to improve, prices need to decrease, and the range of software options needs to widen, but people who have had the opportunity to spend some time with a quality system from the likes of HTC, Oculus, or Sony are nearly universally convinced that they’ve witnessed and partaken in the future. From kids playing games to older adults exploring the globe, the range of experiences is growing, and the level of interest is starting to bubble up past enthusiasts into the mainstream.

Wearables, on the other hand, continue to face lackluster demand from most consumers, even after years of mainstream exposure. Sure, there are some bright spots and 2017 is bound to bring some interesting new wearable options, particularly around smart, connected earbuds (or “hearables” as some have dubbed them). Overall, though, the universal appeal for wearables just isn’t there. In fact, it increasingly looks like smartwatches and other widely hyped wearables are already on the decline.

As a result, I expect revenues for virtual reality and augmented reality-based hardware devices (and accessories) will surpass revenues for the wearables market in 2017. While a clear accounting is certainly challenging (see Prediction 1), we can expect about $4 billion worldwide for AR/VR hardware versus $3 billion for wearables. Because of lower prices per unit for fitness-focused wearables, the unit shipments for wearables will still be higher, but from a business perspective, it’s clear that AR/VR will steal the spotlight from wearables in 2017.[pullquote]From a business perspective, it’s clear that AR/VR will steal the spotlight from wearables in 2017.[/pullquote]
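For a back-of-the-envelope sense of why lower revenue can still mean higher unit volumes, here is a tiny Python sketch. The $4 billion and $3 billion figures come from the prediction above; the average selling prices are purely illustrative assumptions.

```python
# Revenue figures from the prediction above; ASPs are illustrative assumptions only.
segments = {
    "AR/VR hardware": {"revenue_usd": 4e9, "asp_usd": 400.0},   # assumed headset/accessory ASP
    "Wearables":      {"revenue_usd": 3e9, "asp_usd": 100.0},   # fitness bands pull the ASP down
}

for name, seg in segments.items():
    units_millions = seg["revenue_usd"] / seg["asp_usd"] / 1e6
    print(f"{name}: ~{units_millions:.0f}M units at a ${seg['asp_usd']:.0f} average selling price")
```

Under these assumptions, wearables still ship roughly three times as many units even while generating less revenue, which is the point the prediction makes.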

Prediction 3: Mobile App Installs Will Decline as Tech Services Grow

The incredible growth enabler and platform driver that mobile applications have proven to be over most of the last decade makes it hard to imagine a time when they won’t be that relevant, but I believe 2017 will mark the beginning of that unfathomable era. The reasons are many: worldwide smartphone growth has stalled, app stores have become bloated and difficult to navigate, and, most importantly, the general excitement level about mobile applications has dropped to nearly zero. Study after study has shown that the vast majority of apps that get downloaded rarely, if ever, get used, and most people consistently rely on a tiny handful of apps.

Against that depressing backdrop, let’s also not forget that the platform wars are over and lots of people won, which means, really, that nobody won. It’s much more important for companies who previously focused on applications to offer a service that can be used across multiple platforms and multiple devices. Sure, they may still make applications, but those applications are just front-ends and entry points for the real focus of their business: a cloud-based service.

Popular subscription-based tech services such as Netflix and Spotify are certainly both great examples and beneficiaries of this kind of move, but I expect to see many different flavors of services grow stronger in 2017. From new types of bot-based software to “invisible” voice-driven interaction models, the types of services that we spend a lot of our 2017 computing time on will be much different than in the mobile apps era.

Prediction 4: Autonomous Drive Slows, But Assisted Driving Soars

There's no question that autonomous driving is going to be a critical trend for the tech industry and automotive players in 2017, but as the reality of the technical, regulatory, and standards-based challenges of creating truly autonomous cars becomes more obvious in the new year, there's also no question that timelines for these kinds of automobiles will be extended in 2017. Already, some of the early predictions for the end of the decade or 2020 have been moved into 2021, and I predict we'll see several more of these delays in the new year.

This doesn't mean a lot of companies—both mainstream and startup—won't be working on getting these cars out sooner. They certainly will, and we should hear an avalanche of new announcements in the autonomous driving field throughout the year from component makers, Tier 1 suppliers, traditional tech companies, auto makers and more. Still, this is very hard stuff (both technically and legally), and technology that potentially places people's lives at stake is a lot different than what's required to generate a new gadget. It cannot be, nor should it be, released at the same pace that we've come to expect from other consumer devices. If, God forbid, we see additional fatalities in the new year that stem from faulty autonomous driving features, the delays in deployment could get much worse, especially if they happen via a ridesharing service or another situation where ultimate liability isn't clear.

In spite of these concerns, however, I am convinced that we will see some critical new advancements in the slightly less sexy, but still incredibly important, field of assisted driving technologies. Automatic braking, car-assisted crash avoidance and other practical assisted driving features that can leverage the same kind of hardware and artificial intelligence (AI)-based software that's being touted for fully autonomous driving will likely have a much more realistic impact in 2017. Truth be told, findings from a TECHnalysis Research study show that most consumers are more interested in these incremental enhancements anyway, so this could (and should) be a case where the current technologies actually match the market's real needs.

Prediction 5: Smart Home Products Consolidate

Most of the early discussion around the smart home market has centered on standalone products, designed to perform a specific function and meant to be installed by the homeowner or tenant. The Nest thermostat, August smart lock, and various security camera systems are classic examples of this. Individually, many of these products work just fine, but as interested consumers start to piece together different elements into a more complete smart home system, problems quickly become apparent. The bewildering array of different technical standards, platforms, connectivity requirements and more often turns what should be a fun, productive experience into a nightmare. Unfortunately, the issue shows few signs of getting better for most people (though Prediction 6 offers one potential solution).

Despite these concerns, there is growing interest in several areas related to smart homes including distributed audio systems (a la Sonos), WiFi extenders and other mesh networking products, and smart speakers, such as Amazon’s Echo. Again, connecting all these products can be an issue, but so are more basic concerns such as physical space, additional power adapters/outlets, and all the other aspects of owning lots of individual devices.

Because of these issues, I predict we’ll start to see new “converged” versions of these products that combine a lot of functionality in 2017. Imagine a device, for example, that is a high-quality connected audio speaker, WiFi extender and smart speaker all in one. Not only will these ease the setup and reduce the physical requirements of multiple smart home products, they should provide the kind of additional capabilities that the smart home category needs to start appealing to a wider audience.

Another possibility (and something that's likely to occur simultaneously anyway) is that the DIY market for smart home products stalls out and any potential growth gets shifted over to service providers like AT&T, Comcast, Vivint and others who offer completely integrated smart home systems. Not only do these services now incorporate several of the most popular individual smart home items, they've been tested to work together and give consumers a single place to go for support.

Prediction 6: Amazon Echo Becomes De Facto Gateway for Smart Homes

As mentioned in Prediction 5, one of the biggest challenges facing the smart home market is the incredibly confusing set of different standards, platforms, and protocols that need to be dealt with in order to make multiple smart home products work together. Since it’s extremely unlikely that any of these battles will be resolved by companies giving up on their own efforts and working with others (as logical and user-friendly as that would be), the only realistic scenario is if one device becomes a de facto standard.

As luck would have it, the Amazon Echo seems to have earned itself that de facto linchpin role in the modern smart home. Though the Echo and its siblings are expected to see a great deal of competition in 2017, the device's overall capabilities, in conjunction with the open-ended Skills platform that Amazon created for it, are proving to be a winning combination. Most importantly, the Echo's Smart Home Skill API is becoming the center point through which many other smart home devices can work together. In essence, this is turning the Echo into the key gateway device in the home, allowing it to "translate" between devices that might not otherwise be able to easily work together.
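
To make that "translation" role concrete, here is a minimal, hypothetical sketch of a gateway-style dispatcher. The device names, protocols, and handler functions are illustrative assumptions only and do not reflect Amazon's actual Smart Home Skill schema.

```python
# Hypothetical sketch of a smart home "gateway" that translates one spoken
# command into the different protocols each device actually speaks.
# Device names, protocols, and handlers are illustrative, not a real API.

def send_zigbee(device_id, state):
    # Placeholder for a call over a Zigbee radio.
    print(f"[zigbee] {device_id} -> {state}")

def send_wifi_rest(device_id, state):
    # Placeholder for a Wi-Fi/REST call to a cloud-connected device.
    print(f"[wifi]   {device_id} -> {state}")

# The gateway's core job: map one user intent onto each device's native protocol.
DEVICE_REGISTRY = {
    "living room lamp": send_zigbee,
    "porch light": send_wifi_rest,
}

def handle_intent(device_name, desired_state):
    """Route a single voice intent to the matching device-specific handler."""
    handler = DEVICE_REGISTRY.get(device_name)
    if handler is None:
        return f"Sorry, I don't know a device called '{device_name}'."
    handler(device_name, desired_state)
    return f"OK, turning {desired_state} the {device_name}."

if __name__ == "__main__":
    print(handle_intent("living room lamp", "on"))
    print(handle_intent("porch light", "off"))
```

The point of the sketch is simply that the value sits in the routing layer: each device keeps speaking its own protocol, and the gateway hides that complexity behind one voice interface.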

While other devices and dedicated gateways have tried to offer these capabilities, the ongoing success and interest in the Echo (and any ensuing variants) will likely make it the critical component in smart homes for 2017.[pullquote]The Amazon Echo’s Skills platform is becoming the center point through which other smart home devices can work together.”[/pullquote]

Prediction 7: Large Scale IoT Projects Slow, But Small Projects Explode

The Internet of Things (IoT) is all the buzz in large businesses today, with lots of companies spending a great deal of time and money to try to cash in on the hot new trend. As a number of companies have started to discover, however, the reality of IoT isn’t nearly as glamorous as the hype. Not only do many IoT projects require bringing together disparate parts of an organization that don’t always like, or trust, each other (notably, IT and operations), but measuring the “success” of these projects can be even harder than the project itself.

On top of that, many IoT projects are seen as a critical part of larger business transformations, a designation that nearly guarantees their failure. Even if they aren’t part of a major transformation, they still face the difficulty of making sense of the enormous amount of data that instrumenting the physical world (a fancy way of saying collecting lots of sensor data) entails. They may generate big data, but that certainly doesn’t always translate to big value. Even though analytics tools are improving, sometimes it’s just the simple findings that make the biggest difference.

For this reason, the potential for IoT amongst small or even tiny businesses is even larger. While data scientists may be required for big projects at big companies, a little common sense in conjunction with just a few of the right data points can make an enormous difference for these small companies. Given this opportunity, I expect a wide range of simple IoT solutions focused on traditional businesses like agriculture and small-scale manufacturing to make a big impact in 2017.

Prediction 8: AI-Based Bots Move to the Mainstream

It’s certainly easy to predict that Artificial Intelligence (AI) and Deep Learning will have a major impact on the tech market in 2017, but it’s not necessarily easy to know exactly where the biggest benefits from these technologies will occur. The clear early leaders are applications involving image recognition and processing (often called machine vision), which includes everything from populating names onto photos posted to social media, to assisted and autonomous driving features in connected cars.

Another area of major development is with natural language processing, which is used to analyze audio and recognize and respond to spoken words. Exciting, practical applications of deep learning applied to audio and language include automated, real-time translation services which can allow people who speak different languages to communicate with each other using their own, familiar native tongue.

Natural language processing algorithms are also essential elements for chatbots and other types of automated assistance systems that are bound to get significantly more popular in 2017, particularly in the US (which is a bit behind China in this area). From customer assistance and technical support agents to more intelligent personal assistants that move with you from device to device, expect to have a lot more interactions with AI-driven bots in 2017.

Prediction 9: Non-Gaming Applications for AR and VR Grow Faster than Gaming

Though much of the early attention in the AR/VR market has rightfully been focused on gaming, one of the main reasons I expect to see a healthy AR/VR hardware environment in the new year is because of the non-gaming applications I believe will be released in 2017. The Google Earth experience for the HTC Vive gave us an early inkling of the possibilities, but it’s clear that educational, training, travel and experiential applications for these devices offer potential for widespread appeal beyond the strong, but still limited, hard-core gaming market.

Development tools for non-gaming AR and VR applications are still in their infancy, so this prediction might take two years to completely play itself out. However, I’m convinced that just as gaming plays a critical but not overwhelming role in the usage of smartphones, PCs and other computing devices, so too will it play an important but not primary role for AR and VR devices. Also, in the near term, the non-gaming portion of AR and VR applications is quite small, so from a growth perspective, it should be relatively easy for these types of both consumer and business-focused applications to grow at a faster pace than gaming apps this year.

Prediction 10: Tech Firms Place More Emphasis on Non-Tech Fields

While many in the tech industry have great trepidation about working under a Trump administration for the next several years, the incoming president's impact could lead to some surprisingly different ways of thinking and new areas of focus in the tech industry. Most importantly, if the early chatter about improvements to infrastructure and enhancements to average citizens' day-to-day lives comes to pass, I predict we will see more tech companies making focused efforts to apply their technologies to non-tech fields, including agriculture, fishing, construction, manufacturing, and many more.

While the projects may not be as big, as sexy or as exciting as building the coolest new gadgets, the collective potential benefits could prove to be much greater over time. Whether it's through simple IoT-based initiatives or other kinds of clever applications of existing or new technologies, the opportunity for the tech industry to help drive the greater good is very real. It's also something I hope they take seriously. Practical technologies that could improve crop yields by even a few percent, not just at a handful of the richest farms but at all of the smallest farms in the US, for example, could have an enormously positive impact on the US economy, as well as on the general population's view of the tech industry.

Some of these types of efforts are already underway with smaller agro tech firms, but I expect more partnerships or endeavors from bigger firms in 2017.

Rapidly Diffusing Technology

With the mass proliferation of smartphones and a rapidly maturing consumer base making smartphones their primary computing devices, there is a fascinating new dynamic emerging. Our company, Creative Strategies, spent a lot of time in the late 80s modeling adoption cycles of new technologies. Using the market data in hand, along with discussions with academics and people like Geoffrey Moore, who published a seminal book on the subject called Crossing the Chasm, models were built to understand the conditions that drove technologies into the mainstream. In those days, the models were measured in decades as they predicted how technologies went from the early innovators into the hands of average consumers.

Then, much of the effort was built around adoption cycles for personal computers that fit on desks and, eventually, ones that fit on your lap. It took decades just to get to the point where it was common for a household to have a personal computer. Now, around 3 billion humans across the globe have a computer that fits in their hands and goes with them everywhere. We are in uncharted territory and at a unique point in technology history, with this many people owning such a powerful pocket computer, continually connected to the internet and to other people, with instant access to buying and selling, and with a wealth of information and data accessible at all times. Put all this together and we are watching technology diffuse and become adopted into the mass mainstream at unprecedented rates. This is happening with both hardware and software.

I first started thinking about this trend several years after Apple released the iPad. As that product launched, we were still using traditional models to predict and anticipate adoption cycles of new technology. We shortened the time span some, due to more mature market dynamics, but we did not expect the iPad to become the fastest-adopted new technology product of all time, and forecasts came in well under actual iPad sales rates. So, we focused our research and analysis on why this happened and what we could learn. This was the first time we needed to step back and honestly look at how much had changed in the market in order to rethink how technology would diffuse in the modern age.

The iPad was a useful case study in adoption cycles, not just for how quickly it went mainstream but also for how quickly it seemed to hit its addressable market. Once the iPad's S-curve was on a steep incline, forecasters began to modify their underestimated forecasts but then began to overestimate. Some people were predicting a potential market size north of one billion tablets. Sales started to slow and it quickly became a replacement market with a total active installed base of around 350-400 million units, well short of the billion-plus forecasts and well short of the PC installed base of ~1.5 billion and smartphone installed base of ~3 billion. While the iPad went mainstream faster than any tech hardware product in history, it also hit its maximum total addressable market extremely quickly. This is the new dynamic I think we should expect as we move into a cycle of rapidly diffusing technology.
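
As a rough illustration of why this pattern trips up forecasters, here is a simple logistic (S-curve) sketch. The ceiling, growth rate, and midpoint are made-up parameters chosen only to echo the installed-base figure above; this is not anyone's actual forecasting model.

```python
import math

def logistic(t, ceiling, rate, midpoint):
    """Cumulative installed base (in millions) on a simple logistic S-curve."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Illustrative parameters only: a ~400M ceiling with adoption centered on year 3.
CEILING, RATE, MIDPOINT = 400, 1.3, 3

previous = 0.0
for year in range(9):
    installed = logistic(year, CEILING, RATE, MIDPOINT)
    added = installed - previous  # net new adopters picked up during this year
    print(f"year {year}: installed {installed:5.0f}M, net adds {added:5.0f}M")
    previous = installed
```

The net adds climb steeply for a few years and then collapse toward zero as the curve flattens near its ceiling, which is exactly the pattern that first fooled the under-forecasters and then the over-forecasters.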

We can make some of the same observations about the wearable market, specifically things like smart watches and fitness wearables. This market shot off like a rocket but then slowed very quickly, a dynamic that made predicting its exact market size difficult. The slowdown in the category's year-over-year growth came quickly and, while the market for wearables may still prove larger than it appears today, it also diffused quickly in certain developed markets.

We also see this trend in software. Perhaps the most recent example was the fascinating phenomenon of Pokemon Go. There has never been software that went from zero to 500 million as fast as Pokemon Go. While not all of those 500 million people are still using the app, we saw a remarkable example of software diffusion in the form of hordes of people walking around public spaces hunting for digital creatures hiding in the physical world.

The groundwork has been laid for technology, both hardware and software, to diffuse rapidly in short periods of time, which, again, makes it very difficult to predict market size. A product, app, or service may appear to be addressing a larger market than it actually is in its early stages. This means metrics around hardware sales, app downloads, service subscriptions, etc., may be extremely misleading. The problem is, we have no idea to what degree they are misleading or not.

Twitter, Fitbit, and GoPro are just recent examples of things that grew quickly but also hit their maximum market opportunity just as quickly. When these companies went public, it was on the basis of a much larger market opportunity than now appears to be the case. It's possible Snapchat may fall into this category, but we don't know, which is the new challenge of our modern era.

One last point. There are clearly things that will not diffuse as quickly because they are truly new and groundbreaking types of technologies (AR and VR, for example) with which consumers have less familiarity or context. In those cases, I believe we can still assume something closer to the longer, traditional adoption timelines. I'm also not saying the mentioned companies or categories cannot still grow their market opportunity with innovations, only that the "easy" growth is over, and it ended more quickly than anticipated.

All in all, I’m convinced those of us who study these markets are in for new challenges in our approach as we try to size market potential for consumer technology. It means we need to address research with new methodologies, ask different sets of questions, understand deeper nuances of each consumer segment and, overall, be willing to abandon old practices and assumptions to create new ones for the modern era.

Living the Daydream: Google’s New VR Platform takes Shape

I’ve been testing hardware that runs Google’s virtual reality (VR) platform called Daydream. It’s a little rough around the edges and is clearly pushing its brand new hardware to its limits. But overall, it’s a pretty good experience and it shows the potential for mainstream VR going forward. That said, my time inside Daydream further cemented my view that VR today is somewhat exhausting, rather isolating, and is best in small doses.

Soft and Cozy Face Hugger
I tested Daydream using Google’s Pixel XL smartphone and Daydream viewer. When Google announced Daydream at its I/O conference, I thought its viewer would have more technology onboard than the Cardboard-based viewers the company had been pushing as a low-cost entry to VR for years. Cardboard viewers are often literally made from cardboard, although there are plenty of nicer ones out there. In reality, however, the Daydream viewer itself has very little silicon inside its fabric-wrapped shell, aside from an NFC chip. This chip alerts the Pixel you’re about to strap it into the viewer, launching the Daydream interface.

There are some key differences between Cardboard and Daydream. Chief among them is that, to run Daydream, an Android phone must have the right combination of processing power and sensors. I've never been able to use a Cardboard viewer, even a good one, for more than 5-10 minutes without feeling nauseous. But I didn't experience that this time, which I attribute to these hardware requirements. Second, the Daydream viewer includes a Bluetooth controller that fundamentally changes how you interact with content inside the viewer. The result is an experience dramatically more intuitive and immersive than any I've experienced within a Cardboard viewer.

After you walk through a tutorial on using the remote (which includes two buttons and a touchpad area), you can dive right into the content. As you might expect, Google's own content is front and center, with YouTube VR and the Google Play store preloaded. I've had the phone and viewer for about a week and, in that time, Google seems to have added more content to both. Today, there are about 60 apps available through the store. These include games, streaming apps such as Netflix and Hulu (where you can watch standard videos on a giant screen), as well as VR-specific content aggregators such as LittleStar and NextVR. Some of the content is good but much of it is cheesy and gimmicky. Still, there's no denying that, when you hit upon something cool in VR, it's a very exhilarating experience.

Is it Hot in Here?
Unfortunately, at least for me, these experiences are still best enjoyed in brief bursts. While Daydream is certainly a better experience than Cardboard, I still find myself limited to a maximum of 20-25 minutes in the viewer. Part of the issue is my eyes and brain just seem to find VR experiences taxing (I’m also not a big fan of 3D movies). Another problem is when the Pixel is working hard it gets hot to the touch. Not warm, but hot, and that heat gets transferred to your face. After a short while, it gets warm enough to be uncomfortable. And when you remove the viewer, you look like a red raccoon.

I’ve yet to stay in VR long enough to have the phone actually overheat. But, on numerous occasions, I’ve had it drop out of VR mode into standard phone mode. It’s not clear if this is a glitch, user error, or a bit of both. But it does necessitate taking off the headset, removing the phone, and starting the process over. Like I said, a little rough around the edges.

There are other basic interface challenges Google and its partners still need to address. For example, it’s possible to log into your Netflix and Hulu accounts through the VR interface but, if you need to look up your passwords in a manager app, you have to drop out of VR to do so (at least until those apps make their way to VR). Of course, should a friend or family member try to talk to you while you’re inside Daydream with headphones on, they’re probably going to have to tap you on the shoulder. This can lead to bigger jump scares than anything that happens in VR.

At the end of the day, that’s my biggest issue with VR: It’s a fairly lonely place. I know Facebook is planning to drive a social component with Oculus and I would expect Google and others to try to do so at some point as well. Maybe these future social elements will change my thinking. For now, VR requires a level of isolation I find uncomfortable. Which means, at least for the near term, all of my virtual experiences will need to clock in at 30 minutes or less.

Thinking about Apple’s Next New Product Category

Many tech news publications do “year in review” and preview pieces at this time of year. One of the questions I always get asked is what new hardware products Apple might launch in the coming year. Some things – notably the iPhone – are so predictable in their annual schedule at this point they’re barely worth commenting on, while others like the iPad and Apple Watch seem to be settling into something of a pattern too. The most interesting question is often what completely new products Apple might release. With that in mind, here are some thoughts about the new products I think we might see from Apple over not just the next year but the next couple of years.

Additional wearables

I love my Apple Watch – I’ve used one version or another all day, every day, since it first came out. It’s made a meaningful difference in my ability to manage incoming notifications, my health, and my general information consumption. Over the past week, I’ve also been using AirPods a lot and those too are, for the most part, great little devices. However, there are some limitations to both of these products which make me think we might see additional wearables from Apple.

One of the biggest limitations of the Apple Watch, now that it's usable in the pool and has GPS functionality, is that it's not appropriate to wear during certain sporting activities. If you play basketball, soccer, football, lacrosse, or any other contact sport, wearing a watch (of any kind) would be unwise or dangerous, both for the watch and for player safety. If you get a lot of your exercise through these sports, the calories you burn and time spent exercising can't be captured by the Watch and, therefore, simply go unrecognized by the Activity app. In the past, I've used Fitbit devices which I could slip into a pocket while playing and which would track such activity for me. So one obvious device for Apple to launch is a companion of sorts to the Watch which would clip onto clothing or slide into a pocket in order to track such activity, syncing with the Apple Watch when you put it back on.

Others might prefer to have just one of these devices instead of a Watch, if they have never worn a watch of any kind – whether or not someone has traditionally worn a watch seems to be one of the biggest predictors of how they respond to the Apple Watch, in my experience. Some other device worn on the body to track activity and potentially buzz for notifications might be an interesting alternative. If it also came with audio controls as a companion to AirPods, that would make it particularly interesting – I’m finding that using Siri to control playback isn’t always the best fit.

Siri speakers

In my experience, the biggest advantage home speakers like Amazon's Echo or Google's Home have over Siri on any of the devices where it's available isn't the functionality of the assistant itself but the size and configuration of the devices on which it operates. Those devices were, without exception, designed first and foremost with something other than microphone performance in mind. They're mostly intended to be as small as possible, with smooth lines, large displays, and other features which hamper the ability to deliver high-performing far-field voice recognition. As such, if Apple really wants to improve Siri performance, especially in the home, the solution probably isn't in software but in hardware, and that's where a Siri speaker comes in.

The next question is exactly what such a speaker would involve. Echo and Home are both very similar speakers, but they’re standalone – other than the mobile app used to set them up, they connect to WiFi in the home and operate independently. Google Home does work with Google Cast but, other than that, it is essentially disconnected from any other device in the home. It feels like an Apple home speaker would be more integrated into the ecosystem of devices in the home, becoming one of several outputs for audio, for example, and potentially working together with the Apple TV and/or other devices for whole-home audio. One can also imagine using Siri on phones to trigger music playing on the speaker, for example. Or even using the Siri speaker to trigger playing a TV show on the Apple TV for a child in the other room. I can also imagine using several of these speakers independently to recreate a sort of Sonos whole-home audio system.

HomeKit hardware

Another interesting category is first-party HomeKit hardware. To be honest, I think this category was more likely a year ago, when HomeKit was still struggling to get off the ground, versus today's much healthier ecosystem. But I still think it's possible Apple might eventually introduce its own hardware to work as part of the HomeKit system, especially in categories where design and ease of use on third-party devices are poor or in areas where the devices would make a meaningful contribution to other aspects of the Apple ecosystem. For example, sensors placed around the home could help trigger lighting and other home automation features through HomeKit.

Having said all this, I continue to believe the smart home space is essentially stuck at the early adopter phase when it comes to these one-off purchases as opposed to managed services. With that in mind, it's harder to see how Apple could launch products in this category and have a really significant impact on the market unless it also provides some kind of installation and management support. That would obviously be a departure for Apple, whose premise for much of its hardware has always been that it just works. But smart home gear is inherently different in nature from standalone hardware products because it needs to be integrated into the home. That means dealing with wiring and other potentially dangerous and intimidating challenges that don't apply when it comes to phones or laptops.

Augmented reality

Tim Cook has made increasingly enthusiastic remarks about augmented reality over the last couple of years and it seems likely Apple has some kind of play in AR up its sleeve. However, the biggest question is whether it sees the iPhone or some other device as the center of these experiences. We’ve already seen some basic AR features as part of iPhone apps, from an early version of Yelp which superimposed locations of restaurants on a live view of the environment to the more sophisticated merging of the real and virtual worlds in the Pokemon Go app. With dual cameras and the ability to sense depth, the iPhone is certainly capable of more sophisticated augmented reality applications than ever before.

But there are still some categories of augmented reality where a head-mounted device of some kind can provide more advanced functionality and, critically, free your hands to interact with the environment. This could certainly be used for gaming, but also for educational and other scenarios. Apple is reportedly working on at least some head-worn AR devices, though we don't know yet whether any of these will make it to market. However, it feels like 2017 could well be the year we see the first mass-market AR devices launch, testing the market for such devices and potentially laying the groundwork for an Apple entry later.

Timing

If I had to guess, I’d say the Siri speaker and additional wearables are the most likely entrants in 2017, while AR feels at least a year or two away. I’m still not 100% convinced Apple should be in the first party home automation hardware business at all. And of course, I’ve said nothing about cars, which seem less likely as a future hardware category today than they did this time last year and, at any rate, would be multiple years away. It’s entirely possible we won’t see a major new hardware product category from Apple at all in 2017 but I suspect we’ll see at least one at some point.

The Top Trends for CES 2017

Since the Tech.pinions team is taking next week off for the holidays and won't be back in full swing until January 3rd, 2017, just a few days before the 2017 edition of CES, I wanted to share what I see as the important themes or trends at the upcoming Consumer Electronics Show. For those going to the show, you can factor this into your planning and, for those not attending, this is what you should be looking for in the way of news coming out of the show.

CES 2017 is celebrating a major milestone this year: it's the show's 50th anniversary. Actually, there have been well over 70 CES shows because, in the 1980s and 1990s, they also had a summer show in Chicago. For me, this will be the 60th CES show I have attended.

Each year the show grows. Interestingly, when Comdex became the poster-child tech show in the late 1980s and most of the 1990s, CES actually struggled a bit and, during that time, show attendance waned. But once Comdex bit the dust in the early 2000s, CES picked up steam and has continued to grow. CES officials expect at least 170,000 attendees at this year's show, with 50,000 coming from abroad. Those going to the show can explore a show floor of over one million square feet, and most who attend can plan to walk at least 15 miles during it. Wear good shoes if you are going.

Here are what I see as the 5 major themes for this year’s version of CES.

Smart Cars and Autonomous Vehicles

The auto industry has been represented at CES for decades but more in the form of add-on sound systems, in-car entertainment systems and, more recently, navigational products. But CES has now become the showcase for many of the auto companies to show off versions of their smart cars as well as more recently, their prowess in self-driving cars.

We should also see a lot of products that can be added to a car to make it smarter, such as Navdy's add-on that delivers a heads-up display for navigation and connects to smartphones to make existing cars more intelligent. We will also see some innovative designs from Corning, in which the entire dash is made of glass, along with other demos in its booth that show how "smart" glass will eventually change the way we interact with the cars of the future. One related keynote will be from Carlos Ghosn, CEO of Nissan, who is expected to show off the company's autonomous vehicle, and Faraday is expected to show its nearly ready electric car, which aims to go after Tesla.

VR, AR and Mixed Reality will be big hits at CES

A few years ago, Oculus introduced its Rift VR headset and it became one of the biggest hits of CES 2014. Since then, Oculus has been bought by Facebook, HTC has the Vive, Sony has the PlayStation VR and Samsung has its VR goggles connected to Galaxy smartphones. All have brought VR to the attention of businesses and consumers around the world. However, VR so far has focused on games and, when used in business, it is targeted at vertical apps that bring VR to things like real estate listings, travel and many other visually driven business disciplines.

Because it takes powerful goggles or glasses to deliver a serious VR experience, and these are still pricey, VR is still a few years away from reaching mainstream consumers. But VR will be big at CES this year, as will many apps and devices focused on AR and mixed reality, where virtual and real-world imagery overlap in the visual experience. We will see many new products in these categories at the show and we should get a hint of how the market for these products will develop in 2017.

4K TVs are mainstream, but 8K is on the horizon

4K TVs, or Ultra HD (UHD) TVs as they are officially called, were a hot topic at the last three CES shows. This year, that will be true again. As prices have come down, 4K TVs have become more affordable, and anyone upgrading a TV should move to 4K, even though content supporting 4K has been slow to roll out. But 4K programming will be more plentiful in 2017, and buying a 4K set is a way of future-proofing your TV purchase for the near future. The big question will be whether to buy an LCD, OLED or Quantum Dot display model.

Sony argues LCD has a lot of life in it yet while LG wants to move everyone over to OLED. But Samsung says Quantum Dot is the future of TV screens. Cost will be a big factor in this decision. LCDs are made in mass quantities and are getting better in quality and resolution and, at the moment, are less costly than OLED TVs — OLED screens are still very expensive. Quantum Dot TVs are also pricier than LCD but all three are highly competitive and the value of each is in the eye of the beholder.

CES will also have at least a couple of TV vendors showing off the next step up: 8K TVs. The goal is to start moving people to 8K around the 2020 Olympics, which will hopefully be shot in 8K, and then to move consumers over to 8K in earnest in the 2021-2022 timeframe.

IoT will be everywhere

IoT will be represented in just about every product shown in one form or another. Connected devices and IoT-related products will be in everything from new wearables and health products to appliances and vehicles. One could almost call CES the “IoT” show given that just about every product shown will have some form of connectivity. The show recognizes this and has pavilions dedicated to IoT in health, fitness, communications and automobiles. Expect connected devices to be a huge theme again this year.

Personal Robots, Personal Transportation devices and Drones

Given the number of invites I have received about personal robots, I suspect this will be an interesting new category pushed at CES.

Some of these robots are task-oriented, such as robot vacuums and robot coffeemakers, but some are small robots that follow you around and act as a type of personal assistant. Also hot will be personal transportation devices like hoverboards and other variations on the idea of giving people new forms of personal electronic transportation. And we should see dozens of new drones introduced that target both businesses and consumers.

While I enjoy walking the full show at the LVCC, the most interesting area of the show for me is at the Sands and the part of the Venetian that hosts what is called Eureka Park. This is where many of the start-ups are, as well as the special sections of booths sponsored by specific countries such as France, China, Spain, Italy, and others. Every year, I find some gem from one of these vendors. It is one of the richest areas of CES to mine for new products and product ideas.

For many, the show has just become too big and crowded and they choose to not attend. I respect them for that. However, I still see the show as very valuable to check out new products, see old friends, meet new ones, talk with clients, and of course, network. Thankfully, my health is holding up and walking 15-20 miles during the show is actually good for me.

I expect CES to also have a surprise or two, such as when the Oculus Rift was the hit of CES 2014. I'm not sure what that product will be but, if CES is true to form, we will see some hot new product come out of the show that could be very interesting in the New Year.

Cars as Client Devices

It’s no secret that an enormous amount of advanced tech hardware is making its way into today’s automobiles. Whether it’s for assisted or autonomous driving features, advanced infotainment systems or simple safety enhancements, modern cars are getting a big injection of cool new hardware.

Software, on the other hand, has been a bit more muted. Oh sure, there’s the user interface (UI) on the ever-expanding main entertainment and navigation display, but the truth is there are a lot more software efforts going on beneath the hood (literally in this case). In fact, at the upcoming CES show in Las Vegas, I expect to see several announcements related to car-based software and services that turn your automobile into a nearly full-fledged client computing device.

Traditionally, auto-based services were called telematics, but early versions were limited to basic functions such as what’s been found in GM’s OnStar: a separate telephony service for roadside assistance and beaming back car diagnostic data to the auto company’s headquarters.

Today, there's an enormous range of different software built into cars, from middleware and real-time operating systems (RTOSes such as BlackBerry's QNX or Intel's Wind River) to artificial intelligence-based inference engines and beyond.

In fact, there can be over 10 million lines of code in a modern luxury car, working across all the car's various computing elements, from 150+ ECUs (electronic control units, each of which typically runs a particular auto subsystem, such as heating and air conditioning, or portions of the engine) to more traditional CPUs and GPUs from the likes of Nvidia, Intel, Qualcomm and others.

While much of that software will never be seen or directly interacted with by individuals—it’s part of a car’s overall controls—more and more of it is starting to surface through the car’s driver and passenger-focused displays. Many assisted or autonomous driving systems, for example, do provide some visual cues or messages about what they’re doing, though most of their work happens in the background automatically.

In the case of entertainment interfaces, of course, we've started to see the implementation of Apple's CarPlay and Google's Android Auto. In neither case, however, do Apple and Google provide the entire user interface for the vehicle, for two key reasons. First, carmakers are very reluctant to give up the entire user experience to an outside brand. They want and need to "own" the relationship with their customer by making sure it's a GM experience or a Ford experience or a Porsche experience, etc. Second, neither Apple nor Google has access to the vast majority of software running on the automobile because of the hardened walls between subsystems. As a result, they can only interact with a tiny fraction of the software running in a vehicle. (Waymo, the recent autonomous car spinout from Google, and Apple's rumored Titan car project are likely working on many pieces of this more invisible software, among other things.)

In the near term, however, the next set of auto-related software developments are likely to be extensions and additions to popular software and services that get more fully integrated into cars and turn them into first-class client devices. Now that PC and mobile phone-like hardware is being embedded into cars, along with cellular connectivity and larger, high-resolution displays, it just makes sense to do so.[pullquote]The next set of auto-related software developments are likely to be extensions and additions to popular software and services that get more fully integrated into cars and turn them into first-class client devices.[/pullquote]

At a basic level, think about entertainment services like Spotify, Netflix and others coming natively to cars, or imagine tighter integration with good ol' PIM (personal information management) software, such as contacts, calendars, etc. Incorporating things like meeting updates, conference call dial-in information, and other elements directly into your car, instead of via a smartphone app, could prove to be very beneficial. Not only would it improve the convenience and integration of using them in your car, it could have a dramatically positive impact on safety. In addition, if texting and other forms of messaging are directly integrated into car displays (and, more importantly, can therefore be automatically disabled based on the car's speed), that could do more to save lives than any autonomous driving system.
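
As a small illustration of that speed-based lockout idea, here is a hedged sketch. The speed threshold, the vehicle-speed function, and the display function are hypothetical placeholders; a real implementation would read speed from the vehicle's own systems and follow the automaker's interface guidelines.

```python
# Hypothetical sketch: hold incoming messages when the car is moving.
# The threshold and data sources below are illustrative assumptions only.

SPEED_THRESHOLD_KPH = 8  # treat anything above a slow roll as "in motion"

def get_vehicle_speed_kph():
    # Placeholder: a real system would read this from the vehicle's data bus.
    return 57.0

def show_message_on_display(text):
    # Placeholder for rendering on the car's infotainment screen.
    print(f"[display] {text}")

def handle_incoming_message(text):
    """Show a message only when the car is effectively stationary."""
    if get_vehicle_speed_kph() > SPEED_THRESHOLD_KPH:
        # Defer it; a real system might read it aloud via text-to-speech instead.
        print("[deferred] message held until the vehicle stops")
    else:
        show_message_on_display(text)

if __name__ == "__main__":
    handle_incoming_message("Meeting moved to 3pm")
```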

Note that, because many of these capabilities will be delivered as services, the car doesn't need to run a full mobile OS and the apps won't have to be delivered in a native OS format. An HTML5-capable browser is likely all that's necessary, making it easier for car vendors and Tier 1 OEMs to incorporate these software features into their designs, as well as increasing the useful lifetime of the car's technology.

Looking forward, it’s clear that we’re still at the very early stages of bringing significantly more intelligence and capabilities into our cars. Progress is being made, but when you start thinking more deeply about the potential, the full promise of smart cars is yet to come.

Apple AirPods: More than just Headphones

Prior to their going on sale, we had quite a bit of information about the AirPods and what they were capable of doing. We knew they would pair easily and that there were sensors built in that know when you are wearing them and when you aren't. But some things just have to be experienced to appreciate their magic, and the AirPods are one of them.

First, you will never see a more seamless pairing experience than the first time you pair the AirPods. Open the case, press Connect, and they were instantly paired with all of my iOS devices, including my iPad and Apple Watch. As soon as you put one AirPod in your ear, a subtle sound lets you know they are on and ready to be used.

Perhaps my favorite feature is that, when you take one AirPod out, the music automatically pauses. Put it back in and it resumes flawlessly. This is useful when someone is talking to you and you need an ear free to listen and respond. I have some context with this experience, having used the Plantronics BackBeat Pro 2, which offers a similar smart sensor that pauses your music when you take off the headphones. For whatever reason, I found taking one AirPod out much more convenient than lifting the entire headset off my head. Perhaps just preference, perhaps not. In either case, the seamlessness of this experience is fantastic.

Whenever you need to know the battery level of the AirPods or the charging case, simply open the case next to your iPhone and a battery status screen instantly pops up on the phone. Apple is using some sort of close-proximity solution because, if you move the case even one foot away and open it, nothing happens on the phone.

I've been using Bluetooth headphones for years, so the awesomeness that is wireless headphones was not new to me. But these were the first I'd used which are independently wireless — not connected to anything. With sports Bluetooth headphones, you notice and feel the wire on the back of your neck as you move. Similarly, with over-the-ear wireless headphones like the Bose QuietComfort, Beats Wireless or similar models, you feel the band that goes over the top of your head. The point is, they don't disappear. I was surprised and delighted by how comfortable the AirPods are in my ears and how easily you forget they are there. Interestingly, I feel the same way about my Apple Watch. It seems the theme with both of Apple's wearable computers (and yes, I consider the AirPods to be wearable computers) is comfort to the degree of making them feel as though they disappear. This may be ear-shape dependent, so my statement may not be true for everyone, but it is for me.

Many others who have tried them have commented on how well they stay in your ears. I found this to be true. I used them while doing light exercises like yoga and even some living room cardio (via the Apple TV app Zova) and they stayed in perfectly. The lack of a cable makes a difference in helping them stay in your ears. I took it one step further and played a singles tennis match with my playing partner. I’m sure Apple wouldn’t recommend them for an intense run or similar activity, but I figured I’d try it. I’ve tried every form of sport Bluetooth headphones and, because of the wire behind my neck and some of the violent movements of tennis, they all fall out regularly. Here again, not having the wires attached made all the difference in the world. Maybe the AirPod shape fits my ears like a glove but they didn’t fall out one time during my match. In case it matters, I’m a fairly high level (by USTA ranking) tennis player, so I go at it pretty hard.

When I was tweeting my thoughts about AirPods, I got resistance from some saying, “Aren’t they just wireless headphones?” Apple’s AirPods are just wireless headphones about as much as the Apple Watch is “just” a watch and iPhone is “just” a phone. Nothing makes this more apparent than the Siri experience.

Siri in Your Ear
It is remarkable how much better Apple's Siri experience is with AirPods, in part because the microphones are much closer to your mouth and, therefore, Siri can more clearly hear and understand you. I'm not sure how many people realize how many Siri failures have to do with the distance you are from your iPhone or iPad, as well as ambient background noise and the device's ability to clearly hear you. Thanks to the beam-forming mics and some bone conduction technology, Siri with the AirPods is about as accurate a Siri experience as I've had. In fact, in the five days I've been using the AirPods extensively, I have yet to have Siri not understand my request. Going further, the noise canceling built into the AirPods' microphones is impressive as well. I've intentionally created noisy environments to test the AirPods and Siri to see how they handle loud situations. Perhaps the most intense was when I turned my home theater system to nearly its peak volume, blasted Metallica and activated Siri. Remarkably, it caught every word and processed my request.

Furthermore, having Siri right in your ear, available with just a double tap on the side of either AirPod, profoundly changes the experience. In many ways, the AirPods deliver on the voice-first interface in the ways that have impressed me about Amazon's Alexa.

There is something to not having to look at a screen to interact with a computer, especially in a totally hands-free fashion. The AirPods bring about an experience which feels like Siri has been set free from the iPhone. That enhanced the experience but also pointed out some holes I hope Apple addresses.

Voice-First vs. Voice-Only Interfaces
There is, however, an important distinction to be made where I believe the Amazon Echo shows us a bit more of the voice-only interface, and where I'd like to see Apple take Siri when it is embedded in devices without a screen, like the AirPods. The more you use Siri with the AirPods, the more you realize how much the experience today assumes you have a screen in front of you. For example, if I use the AirPods to activate Siri and say, "What's the latest news?" Siri will fetch the news and then say, "Here is some news — take a look." The experience assumes I want to use my screen (or at least that I have a screen near me) to read the news. The Amazon Echo and Google Home, by contrast, just start reading the latest news headlines and tidbits. Similarly, when I activate Siri on the AirPods and say, "Play Christmas music," the query processes and the music then plays, whereas the same request to the Echo prompts Alexa to say, "OK, playing Christmas music from top 50 Christmas songs." When you aren't looking at a screen, that feedback is important. If I make that same request while looking at my iPhone, Siri says "OK" on the screen as it processes the request, but not in my ear. In voice-only interfaces, we need and want feedback that the request is happening or has been acknowledged.

Again, having Siri in your ear and the ability to have a relatively hands-free and screen-free experience breaks down when you ask Siri something that requires unlocking your phone. For example, one of my most common Siri actions is to locate a family member, particularly my daughter, whose bus home from school has a variable drop-off time due to traffic or student tardiness. Nearly every day I ask Siri to locate my daughter. But, when I do so via the AirPods and my phone has been off long enough to lock, it says I need to unlock my iPhone first. I hit this wall due to Apple's security protocols, which I appreciate greatly. I wonder if, in the future, we can have a biosensor in the AirPods which authenticates me and thus gives me the security clearance to process a sensitive request, like reading email or checking on a family member, without having to unlock the phone first.

There are other cases where Siri assumes I can look at my iPhone to complete the request. There are certainly plenty of queries where Siri works in a voice-only experience — when you ask Siri to read your new emails, set timers or appointments, ask what time a sports game is, etc. But the sweet spot will be when you can use Siri thoroughly and not need any screen for the full experience. I'm confident Apple will increasingly go in this direction.

Creating the Siri experience to be more than just voice-first but voice-only will be an important exercise. I strongly believe that, when voice exists on a computer with a screen, it will never be the primary interaction input with that screen. Take the screen away and things start to get really interesting. This is when new behaviors and new interactions with computers take place and it’s what happens when you start to integrate the Amazon Echo or Google Home into your life as both are voice-first experiences.

Looking Ahead
There is a great deal to like about the AirPods. Those who buy and use them will be pleasantly surprised and delighted by their performance as wireless headphones and impressed with the upside of Siri in your ear. I consider the AirPods an important new product in Apple's lineup, in the same category as the Apple Watch in terms of importance for the future. There is a significant observation about both the Apple Watch and the AirPods worth pointing out. Apple has a tendency to push engineering limits at times to perfect a technique it believes is important for the future, or to learn from it in order to integrate it into other products. While iPads and iPhones are getting larger, the Apple Watch and AirPods are pushing the limits of miniaturization. That is key when we start thinking about future wearables, where companies will pack tremendous amounts of technology into extremely small objects. The exercise of packing sensors, microprocessors, batteries, and more into extremely small objects, and manufacturing them at scale, is an incredibly important skill set to develop for the future. Both the Apple Watch and AirPods are key engineering milestones to build on for where I believe Apple is headed in the future.

Podcast: Autonomous Cars, Uber, Waymo, Facebook Fake News

In this week’s Tech.pinions podcast Jan Dawson and Bob O’Donnell chat about developments in autonomous and connected cars, with a specific focus on Uber’s SF experiments, Google spinning out Waymo, and previewing car-related news at CES. In addition, they cover Facebook’s efforts to attack the fake news problem.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Memo to President-Elect Trump: Networks are a Critical Part of Infrastructure

President-elect Trump says he wants to spend upwards of one trillion dollars "rebuilding our nation's infrastructure". I urge him to consider that mobile and broadband networks, and connectivity generally, are just as important for business and national competitiveness in the 21st century as our roads, bridges, airports, and the energy grid.

So far, the incoming Trump administration has signaled a key telecom priority will be reversing what it believes to be the Obama administration’s bureaucratic overreach, with net neutrality being the poster child. It’s consistent with Trump’s views on the Affordable Care Act, Dodd-Frank, and other issues.

But Mr. Trump, and the soon-to-be Republican majority FCC, should not ignore the significant progress made in improving the nation’s broadband infrastructure during Obama’s presidency and Tom Wheeler’s FCC tenure. Over the past eight years, we have seen:

The launch of four national 4G (LTE) networks. Although our roads and airports might be “third world”, our wireless networks are among the best in the world and mobile data usage is among the highest.
The FCC setting the stage for continuing this leadership with 5G, under the Spectrum Frontiers order adopted last July.
A series of successful spectrum auctions, resulting in an approximately 50% increase in the amount of spectrum held by the leading wireless operators.
Several broadband-related initiatives which have led to an increase in household broadband penetration from about 60% when Obama took office to more than 80% today.
A significant increase in average broadband speeds, from less than 10 Mbps download at a typical household in 2009 to more than 50 Mbps today. Among other things, this has enabled successful video streaming services such as Netflix and the ability to consider new, over-the-top (OTT) options for television.
The 3.5 GHz spectrum sharing initiative which, if successful, would be a first and a model that would surely be adopted by other countries.
Robust capital expenditures by broadband and mobile operators during Obama’s term. North America and China are the world’s capex ‘bright spots’. Europe is stagnating and Latin/South America is challenged.

I would like to see the Trump administration build on these trends. Rather than spending a lot of time and energy on dismantling, reversing, and score settling, Trump should send the message that re-establishing our nation’s physical infrastructure as the ‘envy of the world’ includes having the world’s best communications infrastructure.

There are three priorities, in my view. First, we need more, better, and cheaper broadband. Although the U.S. leads in many metrics related to mobile networks, our fixed broadband networks are, on the global stage, decidedly middle-of-the-pack. Household penetration is stalling – getting that last 15-20% is not going to be easy, both physically and economically. Investments in fiber to the home initiatives have stalled, with AT&T and Altice being among the lone bright spots. And we need more competition in broadband. More than 50% of households have a choice of only one decent broadband provider. I’ve always been surprised at the FCC/DOJ’s position on wireless operator consolidation, given the near monopoly structure that exists in broadband. The lack of competition also makes broadband service comparatively expensive.

Over the next few years, we are going to need a significant increase in speeds and broadband capacity to accommodate the household of circa 2020, which will be streaming 4K TV, using AR/VR, and consuming 500 GB+ per month. I am not sure the current telco/wireless incumbents can fund all of this themselves. What can Trump do? He can take a serious look at finding a way for some of the ridiculously wealthy internet players (Netflix, Apple, Amazon, Google, Facebook) to fund some of it. He can also earmark more of the proceeds from spectrum auctions for some level of network construction subsidy, rather than watch tens of billions of dollars melt into the miasma of the Department of the Treasury.

Second, we need to continue the momentum in mobile. The Wheeler FCC achieved a lot in making more spectrum available, developing an innovative spectrum sharing scheme at 3.5 GHz, and laying the groundwork for 5G. There is still a lot of work to be done here and these are complex issues.

I would not be opposed to consolidation in wireless, particularly if it appears we can’t have four healthy, profitable wireless operators. If Sprint does not have the resources to truly leverage its 2.5 GHz assets, then let’s get it in the hands of an entity that can. And let’s get DISH to put its treasure trove of spectrum, which the company has been amassing and sitting on for years, to work.

On 5G, the Trump administration has a great opportunity to make this a category of infrastructure where the U.S. leads the world. A great deal of innovation is already coming from U.S.-based companies. Trials will begin in earnest in 2017. There is still a lot of work involved to ensure the millimeter wave bands are, in fact, viable for commercial wireless services. The public sector can also play a role, developing a structure that will make it easier to deploy the vast number of sites required for 5G in cities.

5G will also push the question of what networks and the industry structure will look like in the early 2020s. Will we still need separate fixed and mobile network subscriptions? How can public infrastructure be leveraged to facilitate the deployment of the millions of Wi-Fi/5G-equipped small cells and the concomitant backhaul capacity? Can fixed wireless, with small cells 100 or 200 meters from the home, be a viable alternative to fixed broadband?

Third, I believe IoT can play a significant role in the infrastructure improvements envisioned by Trump. We are at a tipping point with IoT: module prices have come down, purpose-built IoT networks are being built in both the licensed and unlicensed bands (see my December 2 column, The Emergence of Purpose-Built IoT Networks), and both enterprises and industrial players are investing in the sector. So IoT is starting to happen.

All of these devices and sensors can play an important role in key verticals such as transportation, smart grid, smart cities, and so on. The new administration can encourage the use of sensors to make our infrastructure smarter, cheaper, more efficient, and data-driven. Major tech companies, from Cisco to IBM, are aligning to play a role here. This will also require better coordination between various branches of the public sector, including municipalities.

On a final note, it has been interesting to see that Trump has filled several senior positions with executives from the private sector – mainly the financial and energy industries. Why not bring in some of the tech industry’s all-stars to help “Make America Great Again”? After all, many of these people, and the companies they founded or led, have been among the “Greatest Things About America” over these last several years.

Tech Should be Helping Families

Technology has done an amazing job of helping empower us as individuals – we can do more, and more quickly and easily, because of technology. But technology can also be isolating, separating us from each other as we retreat into our own virtual worlds. When technology does provide connections between people, it’s often between friends rather than families. There have been few apps, devices, or other technologies designed to really help families in a meaningful way. I’d love to see that change.

Technology can be isolating

Most technology is aimed at individuals, each in their own bubbles. Algorithms learn about us as individual human beings, not as groups or families. That’s fine when it comes to much of the technology we consume because we use it for our own personal interests and tasks. But it can also mean we’re each retreating into our own virtual worlds, carefully customized and curated for each of us. Even when we’re physically together as families, we’re often absorbed in our own devices and activities, separate mentally and emotionally.

The same technology that has so much power to enrich our lives individually, then, often disempowers families seeking to build connections and relationships and to form bonds. Technology becomes a barrier rather than an enabler of those relationships, and many a parent has struggled to find ways to overcome it. To the extent companies have sought to provide technology for families, they’ve often focused on enabling parents to abdicate responsibility through time limits, parental controls, and the like, rather than giving them tools they can actively use or that connect them to their children.

There are exceptions

This is not to say technology has done nothing for families in recent years – I’ve actually seen some real examples of technology being put to good use in helping families. Here are just three:

  • Netflix’s user profiles – Netflix introduced user profiles a few years back and they’ve been very useful in our family for separating the viewing my wife and I do from the shows our kids watch. That has two benefits – the parents and kids each get recommendations based on their own viewing, not each other’s, and the kids aren’t unexpectedly presented with adult shows. Our children’s shared profile is explicitly a Kids profile and is designed differently and populated with content appropriate for their age group. They know how to select the proper profile when watching and, though we tend to keep a close eye on what they’re actually watching, we can let them choose their own shows because we know they’re picking from a safe list of content.
  • Apple’s Family Sharing functionality – we’ve only recently started using this, as our oldest child has begun using her own device rather than relying on shared iPads. She doesn’t use it extensively yet, but she does have her own Apple Music account on our family plan and is able to request permission to download apps, which I can then grant from my phone. We still typically have a conversation about the purchase or download in person first, but the technology enables a seamless execution once we’ve agreed in principle. Our kids also get access, in the same way, to the TV shows and movies I’ve purchased on my account.
  • Picniic – this is an app I came across recently when the firm’s PR reps reached out to me following a column I wrote on smart home assistants. Though positioned as a smart home tool, what Picniic really represents is a smart family assistant. It’s one of the first apps I’ve come across which actively seeks to solve problems for families and that’s refreshing. It allows families to share calendars, meal plans, grocery lists, and so on, representing a sort of virtual noticeboard or refrigerator door. I haven’t used the app extensively yet – I suspect it’s more useful for those families with hectic schedules and children being ferried to and from music lessons and soccer games, something our kids are mostly too young for at this point. But I can see the utility and admire the focus on helping families.

There are also lots of general purpose technologies which families can leverage, from Skype to texting to shared cloud-based calendars. I put out a request on both Twitter and Facebook to ask what technology families were using to help them connect and communicate and much of it was in this generic category.

Some requests

However, I think the industry can still do better and there are opportunities for innovators to meet needs currently unmet. As I asked about how families use technology today, I also asked what more could be done. Based on those responses and my own thoughts, here are some requests:

  • Better device sharing – Google and Amazon have both done some interesting things here but Apple in particular still doesn’t have a great way to share devices between family members such that the interface or the apps available are different on a per-user basis. As I mentioned above, the Family Sharing setup is great for sharing content between devices used by different individuals but there’s no equivalent for multi-user support on a single device (except in an education setting).
  • Learning about and making recommendations for families – I said technology is great at learning about and customizing experiences for individuals, but there’s no equivalent for families. I also mentioned that my wife and I and our kids share two profiles on Netflix, but Netflix isn’t really learning about us as individuals. Rather, it thinks we are these strange hybrid creatures who at once like weepies and action movies on the one hand and TV shows for toddlers and tweens on the other. I’ve seen some interesting demos of technology that combines individual preferences based on who’s watching, but that’s about as far as it has gone. (A simple illustration of how individual preferences might be blended into a family recommendation follows this list.)
  • More content for families – I’ve written previously about the TV industry’s lack of imagination when it comes to using the new-found freedom enabled by digital-first platforms to create content for families and I’d love to see more innovation in this area. But I’d also like to see more technological solutions for filtering and parental controls. VidAngel, a service my family has been using regularly, provides filters to remove swearing and other objectionable content from TV shows and movies so we can watch them as a family, but it was shut down (hopefully temporarily) by a judge this week. Content and technology companies are still often far too user-hostile when it comes to content and families.
  • More apps for families to use together – we’ve enjoyed a number of apps, especially on the new Apple TV, which re-create the old board game experience for a digital age. But there are still few of these relative to games intended for solo use, or games which are too violent for me to want to share them with my kids. I feel like the industry is making progress but there’s more to be done. It’s worth noting that many board games cost upwards of $20, which leaves plenty of price umbrella for digital competitors to squeeze under.
  • Apps to help manage families – I mentioned Picniic, which is notable for being one of a very few apps that really seek to serve families and help them manage their time and activities. But there’s room in the market for more than one such app. We could see plenty more innovation here.
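
On that recommendations point, here is a minimal, hypothetical sketch (in Python) of how per-member taste profiles could be blended into a single family recommendation. The two blending strategies shown, averaging and “least misery”, are well-known approaches from the group-recommendation literature; the members, titles, and scores are invented for illustration, and this is not how Netflix or any other service actually works.

# Hypothetical sketch: blending individual taste profiles into one family
# recommendation. All data below is made up for illustration.

from typing import Dict, List

# Each member's predicted enjoyment of a title, on a 0-5 scale.
family_scores: Dict[str, Dict[str, float]] = {
    "parent_1": {"Crime Drama": 4.5, "Animated Film": 3.0, "Toddler Show": 1.0},
    "parent_2": {"Crime Drama": 3.5, "Animated Film": 4.0, "Toddler Show": 1.5},
    "kid":      {"Crime Drama": 0.5, "Animated Film": 4.5, "Toddler Show": 5.0},
}

def group_scores(scores: Dict[str, Dict[str, float]], strategy: str = "average") -> Dict[str, float]:
    """Blend per-member scores into one score per title for the whole family."""
    titles = next(iter(scores.values())).keys()
    blended = {}
    for title in titles:
        member_scores = [scores[member][title] for member in scores]
        if strategy == "average":
            # Favor titles most members like.
            blended[title] = sum(member_scores) / len(member_scores)
        elif strategy == "least_misery":
            # Nobody should hate the pick: the group score is the unhappiest member's score.
            blended[title] = min(member_scores)
        else:
            raise ValueError(f"unknown strategy: {strategy}")
    return blended

def recommend_for_family(scores: Dict[str, Dict[str, float]], strategy: str = "average", top_n: int = 2) -> List[str]:
    blended = group_scores(scores, strategy)
    return sorted(blended, key=blended.get, reverse=True)[:top_n]

if __name__ == "__main__":
    print(recommend_for_family(family_scores, "average"))       # ['Animated Film', 'Crime Drama']
    print(recommend_for_family(family_scores, "least_misery"))  # ['Animated Film', 'Toddler Show']

Averaging favors titles most members enjoy, while least misery avoids anything one member strongly dislikes; any real family-aware recommender would have to make, or let families choose, trade-offs like these.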

I’m generally optimistic when it comes to technology – I’m far from a Luddite, and technology is both the focus of my work and a massive enabler of what I do. I also use technology heavily within my family for all kinds of things. But the benefits to families have so far been mostly incidental rather than the result of deliberate efforts to help and serve families, and that’s something that could stand to change. Whether it’s the big platform and device companies putting more effort into all of this or startups launching apps or devices to help, I’d love to see more innovation in this area.

Microsoft’s AI Is Not Just About Being Smart

On December 13th, at an announcement-packed event in San Francisco, Microsoft shared its views on Artificial Intelligence and the progress it has made thus far.

Back in September, Microsoft created a new AI and Research group of about 5,000 people under the leadership of Harry Shum, and they have certainly been busy. Microsoft announced several different AI initiatives during the event:

  1. A new chatbot called Zo.ai that is integrated into the messaging app Kik
  2. A Cortana Devices SDK so Cortana can run on all kinds of devices, including what looked to be an Echo-like smart speaker manufactured and branded by Harman Kardon
  3. Calendar.help, which loops Cortana into our email so she can schedule meetings for us
  4. New calling capabilities for Skype bots and the ability to include rich media
  5. Microsoft Translator Live, which lets you have a real-time conversation with people who speak different languages

Some key milestones were also shared at the event:

  1. 67,000 active developers are using the Microsoft Bot Framework
  2. The Chinese bot Xiaoice and the Japanese bot Rinna average 23 conversation turns per session
  3. The record length of a single interaction with Zo was nine hours and 53 minutes

The list of achievements is significant but it’s the picture that develops when looking at them in total that really shows how invested Microsoft is in AI.

Microsoft Has Yet to Capitalize on the App Economy but is Making Strides with Bots

Nobody would dispute that Microsoft missed the whole “app economy” craze. While Windows Phone caught up with iOS and Android from a technology standpoint, its low market share left developers with little interest in developing for the platform. Universal apps improved the landscape a little, as they allowed developers to maximize their effort by having their apps run on different devices, but the gap is still there.

Microsoft seems to have learned from its mistakes and is making sure it will not be left out of the next craze: chatbots. Many have talked about chatbots as the next killer app and, while I do not see them as such, there is something to be said for bots taking over some of the roles played by apps today. For example, a travel chatbot could help you book your vacation in the same way you currently do through an app. So, while Windows might not have the best travel apps, it will allow users to get the job done through bots.

Long term, bots will certainly make the app gap a non-issue, but the transition will not happen overnight and Microsoft needs to ensure users will be engaged on Windows in the first place. Having 67,000 developers actively engaged with the Microsoft Bot Framework is encouraging to see, as was the long list of services already taking advantage of the framework. A crucial point Microsoft made during the presentation is the need for these chatbots to be freed from the messenger apps where they now reside. Messenger apps are clearly an easy starting point, given the need to make interactions with bots feel similar to the ones we have with a real human being. But bots need to be free to show up in email, on the web, and in the other places we visit every day. Ultimately, bots need to follow me wherever I might need them.
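
To illustrate that point about bots being freed from any single messenger, here is a small, purely illustrative Python sketch (not the actual Bot Framework API) of the underlying pattern: one piece of channel-agnostic bot logic behind interchangeable channel adapters, so the same hypothetical travel bot can answer a request whether it arrives from a messenger, from email, or from the web.

# Illustrative sketch only: channel-agnostic bot logic with thin adapters.
# The TravelBot and its canned replies are hypothetical.

from dataclasses import dataclass

@dataclass
class Message:
    user: str
    text: str

class TravelBot:
    """One bot 'brain', reusable across many front ends."""
    def handle(self, message: Message) -> str:
        text = message.text.lower()
        if "flight" in text:
            return f"Sure, {message.user}. Where would you like to fly, and on what dates?"
        if "hotel" in text:
            return "I can look up hotels. Which city and which nights?"
        return "I can help with flights and hotels. What would you like to do?"

class MessengerChannel:
    """Adapter for a chat surface: hands the text straight to the bot."""
    def __init__(self, bot: TravelBot):
        self.bot = bot
    def on_incoming(self, user: str, text: str) -> str:
        return self.bot.handle(Message(user, text))

class EmailChannel:
    """Adapter for email: same bot, wrapped in a reply message."""
    def __init__(self, bot: TravelBot):
        self.bot = bot
    def on_incoming_email(self, sender: str, body: str) -> str:
        reply = self.bot.handle(Message(sender, body))
        return f"Subject: Re: your trip\n\n{reply}"

if __name__ == "__main__":
    bot = TravelBot()
    print(MessengerChannel(bot).on_incoming("Carolina", "Can you book a flight to Rome?"))
    print(EmailChannel(bot).on_incoming_email("carolina@example.com", "Looking for a hotel in Rome"))

The design choice the column argues for is exactly this separation: the bot’s “brain” lives in one place, and each new surface, whether a messenger, email, or the web, only needs a thin adapter.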

Amazon and Microsoft Share Their AI Approach and Goal

Microsoft, like Amazon, talked about democratizing AI by allowing everyone to integrate Cortana and Alexa into their devices. This is because both want to be the underlying platform of preference for the AI revolution. Ultimately, the revenue that will be generated by empowering other hardware and services will be so much greater than what homegrown hardware could ever generate. In a way, this is no different from what Microsoft did with Windows and PCs. Windows became the platform for the computing revolution, and now the Microsoft Bot Framework and Cortana will become the platform on which the AI revolution will be built. Let’s not forget that AI needs constant feeding of data and constant learning, which will come from different use cases, on different devices, in different circumstances.

Conversational AIs Will Wow Consumers…Eventually

Microsoft also talked about how AI cannot only be about IQ; in order to create a true bond between humans and digital assistants or bots, AI also needs a strong EQ.

I could not agree more with that philosophy. I have spoken before about how Amazon has been able to create a strong bond with Alexa just by giving her a name, while Google, with Google Home, has created a certain distance between user and machine. Personifying your assistant helps create that bond, but having an assistant that understands emotions, nuances in vocabulary, tone, and, if I can be seen, my expressions changes the interaction at its core.

Microsoft is, of course, not alone in wanting to focus on natural language in order to create a conversation. Google has been talking about this very topic when showing off Google Home. Yet today, in most of our interactions, the experience feels quite transactional. I ask a question; I get an answer. If I do not get the right answer, I might ask again, but that is pretty much it. This is not really how conversations happen.

Personality is a big part of EQ and, if Microsoft wants to build a true conversational AI, it has to focus not only on creating personalities for Cortana and its bots but also on teaching them how to speed-read people so they can adapt that personality.

Zo.ai is the successor to the short-lived Tay.ai experiment. Tay tried to replicate a millennial and, very quickly, we were exposed to what happens to a millennial’s personality when social media tries to push some buttons.

I interacted briefly with Zo and I have to admit I was not impressed. In her defense, though, she must have been thrown off by the fact that someone she clearly thought could be her father – she told me she was 22 and that older men do everything better – was conversing with her. What is important is that each and every interaction we have makes Zo more aware. I see Zo going through a mix of biosensor stimulation and socialization so she can learn to cope with different situations in the future.

Delivering Value Now!

All of this will take time, and Microsoft is smart to deliver value now so we, as users, start to build our trust through simple tasks and conversations. Microsoft Translator Live and Calendar.help are two great examples of Microsoft using AI to take pain points out of our day-to-day lives. Having a three-way live conversation with people who do not share a common language is something that, if you have ever worked on an international team or married someone of a different nationality, you can easily relate to. The nightmare of checking availability to set up a meeting with multiple people is also something many of us have experienced. These might be seen as mere tasks, but they make a big difference in our lives. For me, it could be as simple as allowing my daughter to have a rich conversation with my mom; something I would see as extremely valuable and that would create an immediate reliance on Cortana or whoever else is empowering that experience.
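
For developers, the translation capability is also exposed programmatically. As a rough sketch, assuming an Azure Cognitive Services subscription, a call to the Microsoft Translator Text REST API (version 3) looks roughly like the following; the key and region values are placeholders, and the live, multi-person conversation feature shown at the event layers speech recognition and session management on top of this kind of text translation.

# Rough sketch of a Translator Text API (v3) call; key and region are placeholders.
import requests

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
SUBSCRIPTION_KEY = "YOUR_TRANSLATOR_KEY"   # placeholder: your Azure key
REGION = "YOUR_RESOURCE_REGION"            # placeholder: e.g. the Azure region of your resource

def translate(text: str, to_lang: str) -> str:
    """Translate a single string into the target language."""
    response = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": to_lang},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
        timeout=10,
    )
    response.raise_for_status()
    # The service returns one result per input item, each with a list of translations.
    return response.json()[0]["translations"][0]["text"]

if __name__ == "__main__":
    print(translate("Where shall we meet for dinner?", "it"))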

The difficulty I see for Microsoft and other AI players is making sure users connect the dots and see that bots, assistants, translators, calendars, maps, and so on all share a common brain, and that talking about AI is not the same as talking about a personal assistant.

The Workplace of the Future

To no one’s surprise, how and where we work matters to people. Not just the company you work for, but the physical environment, the culture, the people, and the tools you use to get things done.

Intuitively, that’s obvious of course, but when you start to dig into exactly what it is that people do at work, where they work and what they use, you start to see a fascinating picture of current workplaces—as well as where they’re headed.

That was exactly the intention of the latest TECHnalysis Research study—fielded to over 1,000 US employees across a range of industries during the past week—and I’m pleased to report that the results do not disappoint.

At a high level, people only spend about 46% of their average 43-hour work week in a traditional office or cubicle environment. We’ve been witnessing a shift away from those workspaces for a long time, but the move is likely to accelerate as most workers believe that the percentage will drop to just under 41% in two years.

What’s surprising, however, is that the biggest increase won’t be coming from trendy new alternative workspaces or other non-traditional worksites. Instead, it’s working at home. Toiling in your PJs (or whatever attire you choose to wear at home) is expected to jump from 11% of the total work week to 16% in two years.

Directly related is the growing importance of work time flexibility. In fact, when asked to rank the importance of a company’s tech initiatives for keeping employees happy and productive at work, the number one choice among eight alternatives was work time flexibility.

Not surprisingly, when people were asked in a separate question about the benefits of working at home, the top reason they cited was—you guessed it—work time flexibility.

Clearly, the move to mobile computing devices, more cloud-based applications, and internal IT support for enabling work from remote locations has had a large impact on employees’ expectations about how, when, and where they can work, and, well, there’s no place like home.[pullquote]The move to mobile computing devices, more cloud-based applications, and internal IT support for enabling work from remote locations has had a large impact on employees’ expectations about how, when, and where they can work, and, well, there’s no place like home.[/pullquote]

From a collaboration perspective, there have been a number of advancements in both the software and hardware being used in various workplaces. As expected, usage of these various tools is mixed and interest in them can vary quite a bit by age. At a basic level, for example, email is still the top means of collaboration with both co-workers (39% of total communications) and outside contacts (34%), with phone calls second (25% and 32%, respectively) and texting third (12% for both groups). Among 18- to 24-year-old millennial workers at medium-sized companies (100-999 employees), however, social media with outside contacts accounted for 12% of all communications versus only 6% for the total sample.

Collaborative messaging tools like Slack and Facebook’s Workplace still showed only modest usage at 4% overall, but again, 18- to 24-year-old millennials at medium-sized companies nearly doubled that usage at about 7.5%. More importantly, while one-third of total respondents said their companies offered a persistent chat tool like Slack, another 31% said they wished their companies did.

From a hardware perspective, 32% of employees said their companies had large interactive screens in their conference rooms (a la Microsoft’s Surface Hub, which the company just announced was being well received in the market) and another 31% are hoping to see something like that installed at their workplaces sometime soon.

Interestingly, the videoconferencing aspect of these and other devices also drew some distinct, age-based responses. About 25% of total respondents said they used video the vast majority of the time when making conference calls, but that jumped to nearly 40% for younger workers (under 44) at medium-sized companies. The group that found video most effective during meetings was actually the 35-44 cohort, in both medium- and large-sized companies. In each case, the Gen X and Gen Y’ers in that group found it more useful than both younger and older employees.

Finally, one insight from the study highlights an IoT opportunity in today’s workplace. A widely requested technology was an app or service that would allow workers to individually adjust the temperature and airflow in their personal work areas. While that could be challenging to achieve, there’s clearly an opportunity for companies willing to tackle it.

Today’s workspaces are in an interesting state of flux, with a lot of attention being placed on attracting and retaining younger workers. While data from this study clearly supports some of those efforts, the results also show that many of the more traditional methods of communication and collaboration still play a dominant role—even with younger workers. As companies move to evolve their workplaces and vendors adjust to create products and services for these new environments, it’s important to keep these basics in mind.

Fake Tech

There’s fake news, fake science, and now, “fake tech”. Fake tech is a term that came to mind while reading about the augmented reality startup Magic Leap. The company has raised $1.4 billion based on videos created to demo its technology. But new information has surfaced that indicates these videos were created using special effects, simulated by a New Zealand company that specializes in such things. While it’s not clear how real the company’s technology is, you could describe these simulated presentations as fake.

Then there’s Theranos, the health technology company that raised hundreds of millions of dollars for its fingerstick and microfluidics technologies that promised to revolutionize blood testing. The company’s valuation was as high as $9 billion before it was discovered that much of its technology was more wishful thinking than reality. Apparently, its charismatic leader was able to persuade a number of luminaries to serve on its board while others, including Walgreens, made huge investments based on fake evidence or no evidence at all.

Because technology is often complicated and overwhelming to those without science or engineering training, potential customers and investors are not equipped to make knowledgeable assessments and therefore follow the crowd of believers, not wanting to be left behind.

But, as many of us working in Silicon Valley know, there’s a propensity for entrepreneurs to take on tasks that may seem insurmountable, or even impossible, and that can lead to real innovation and breakthroughs. Along the way, with the need to attract investment, employees, and customers, it’s easy for the promises to get ahead of the reality. People want to believe and can easily fall prey to leaders who may be better at promotion than at the actual science.

In the case of Theranos and Magic Leap, there were early warning signs, such as the companies’ refusal to provide real demos. In both cases, the truth came out when former employees came forward with their stories. In the case of Theranos, an intensive investigation by the WSJ did much to undermine the company’s credibility.

I’ve also experienced fake tech on Indiegogo and Kickstarter. There are products described with seemingly impossible claims that can’t be verified by the host sites. So, anyone with a clever idea and a simulated video can raise money proposing an idea that’s impossible to do. Some may know it’s impossible but many don’t know what they don’t know and believe it can be accomplished with enough money.

In addition to these, there are more nuanced examples of fake tech practiced by major companies that rely on exaggerated claims to garner publicity and boost their stock. While perhaps not completely fake, these claims are a lot less than what they seem to be.

Uber claimed it was beginning to use driverless cars in Pittsburgh when, in fact, it was starting to test the cars with a professional driver at the wheel. Amazon announced last year that it was going to begin delivering packages using drones, yet that capability is still years away.

In these cases, the press jumped on the stories, encouraged by the companies’ professional PR people, who are skilled at creating headlines out of bits of truth and at playing to the strengths and weaknesses of gullible reporters. While perhaps not factually inaccurate, the results were closer to almost-fake tech.

What fuels fake tech is what fuels fake news — the need to create headlines that result in clicks, eyeballs and, hence, dollars; the need to get above the noise and stand out in some way. Too often, reporters not trained in science or technology fall for these stories without a critical eye. They, too, want to believe and, as a result, promote a story without understanding the nuances behind it.

What’s the solution? Good reporting by trained journalists who understand basic science. Reporters with a skeptical eye who understand they can’t accept everything they are told. Assessment of claims by industry experts without financial ties to the company or its investors. And reliance on industry analysts who have seen and heard it all before and are not taken in by unsubstantiated claims.