The Importance of the Quality Engineer

When a new product is introduced at a company event, it’s the executives, design engineers, and industrial designers who get all of the credit. But behind every new product are engineers who focus on quality and reliability, and they rarely get much recognition.

Theirs is not a glamorous job; it requires more discipline and less creativity than design work. They often focus on the negative, trying to find problems with a product before it ships. They’re also the ones who can force the designers to go back and redesign, delaying a product’s introduction, which subjects them to a lot of pressure.

Their job is to simulate the worst cases the product will experience before it ships, taking on the role of the customer. They’re not the most popular members of the team; they’re sometimes seen as traffic cops. Design engineers focus on creativity and invention. The product is their baby, and no parent wants to be told their baby is flawed or has a wart.

If a company’s executives or board wants to know how a new product is really doing, the quality engineers are the ones to ask. They have all of the statistics because their job is not only to be sure the product will last but to monitor how well the product is performing once it ships. They’ll tabulate the complaints, analyze the returns, and report back to the design team what needs to be improved.

Yet, in spite of their efforts to provide an objective account of a product’s performance before it ships, they’re occasionally overruled. Companies often ship products with the expectations of getting returns, based on a calculated decision. You just hope that doesn’t happen when safety is involved.

When Samsung shipped the Galaxy Note 7, both the initial and the redesigned versions, it’s hard to believe the quality engineers were happy rather than overruled.

I’ve often thought how valuable it would be if we, as consumers, had access to the quality information before buying a new product. Our buying decisions would be much more informed.

We’d know the likelihood of a product needing to be repaired, see a list of the most frequent failures, and be able to make a clear comparison between competing products before purchasing. But, of course, this information is closely guarded and rarely available. The best we can do is consult sources like Consumer Reports, which tabulates the experiences of its subscribers for a few categories of products.

The next best alternative is to access customer reviews, surveys, and complaints from the web. While it’s impossible to determine the specific percentage of returns, there’s plenty of anecdotal evidence that can help. It’s easy to Google a problem we encounter and see if others have had similar problems.

A case in point. Almost a year ago, my daughter gifted me a Fitbit Charge as an incentive to be more active. But, after nine months, I’m on my fourth unit. The silicone rubber band on two units simply peeled apart, and a third unit had a defective battery that lasted less than a day. The products were never abused or used near water. While the failures seemed statistically surprising, I initially assumed I’d just had bad luck. But when I mentioned this to others, two relatives and a friend, each of the three had replaced theirs several times as well. And, after searching through the company’s blogs and online reviews, I found scores of users reporting similar issues across many of the models.

Now, any product can expect to have a small percentage of defective units and, when you sell millions, the actual number of defects can be large. Typical return numbers for defects in the first year for a well-designed and manufactured product range from about half a percent to 2%. More than 3% is considered very high.
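To put three failed units in perspective, here’s a quick back-of-the-envelope calculation (a sketch, assuming failures are independent and using the typical first-year defect rates above):

```python
# Chance that one buyer receives three defective units in a row, assuming
# independent failures at the typical first-year defect rates cited above.
for defect_rate in (0.005, 0.02, 0.03):   # 0.5%, 2%, 3%
    p_three_bad = defect_rate ** 3
    print(f"{defect_rate:.1%} defect rate -> {p_three_bad:.10f} chance of three bad units")
```

Even at a “very high” 3% rate, three consecutive failures would be roughly a one-in-37,000 event, which is why a run like mine suggests a defect rate well above the normal range.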

It’s hard to know what the true percentage of Fitbit defects is from this anecdotal evidence. Based on my experience, I’d bet it’s very high.

But considering the poor performance of the company’s stock in recent days, if I were a stock analyst, I’d want to understand how big a problem this is. Ask the quality engineers and you’ll get a better prediction of Fitbit’s fortunes than you will from the CFO.

—————————
Fitbit offered this response when I asked them about their product quality issues:

The quality of Fitbit products and the health and safety of our customers are our top priorities. We conduct extensive testing and consult with top industry experts to develop stringent standards so that users can safely wear and enjoy our products. We also are committed to delivering a superior customer experience. We respond quickly when customers report issues and strive to work closely with them through our customer service channels to ensure their satisfaction. If consumers have any questions or concerns, they can contact us at help.fitbit.com.

While much of this may be true, their product reliability is not there yet.

The Election’s Impact on Tech Regulation

The Obama presidency and the FCC, under Chairman Tom Wheeler, have been among the more activist and ambitious in recent memory. There have been some big victories — successful spectrum auctions, innovative spectrum sharing and 5G initiatives, the National Broadband Plan — and some acrimonious proposals, notably around network neutrality and cable set-top boxes. The Justice Department has been a bit mercurial: opposing major consolidation in mobile and broadband (AT&T/T-Mobile, Sprint/T-Mobile, Comcast-Time Warner Cable), but allowing Charter’s acquisitions of TWC and Bright House and Comcast’s acquisition of NBC Universal.

In a few days, there will be a new President-elect and transition teams will begin strategizing for the post-January 20, 2017 world. What might be the impact of the election on comms and media regulation? I’ll start with a broad view and then drill down to a few of the more prominent items.

Of course, who is elected President will potentially have a significant bearing on tech. Hillary Clinton has a pretty detailed and well-articulated platform, with a particular emphasis on expanding broadband availability. She is likely to continue many of President Obama’s initiatives and priorities. If Secretary Clinton is elected, it is likely FCC Chairman Wheeler stays on until July or so. If she is true to form, expect FCC Commissioners and senior-level FCC staffing to take on a ‘FOC’ (Friends of Clinton, and by that I mean Hillary and Bill) flavor. On the other hand, President Clinton could signal intent to bridge gaps with the Republicans by ensuring a balanced FCC. The current FCC has three Democratic and two Republican commissioners, but the Democrats (especially Commissioner Rosenworcel) have not always been in lockstep with Chairman Wheeler.

If Donald Trump is elected, things are more of a wild card. To begin with, he has said little about this sector during the campaign and there isn’t much to glean from his policy platform. Trump is likely to be much more hands off than President Obama (who was very hands on). He will also be more pro-business and anti-regulation, which is why his off-the-cuff remark about the proposed AT&T-Time Warner deal was surprising. If he ends up being Delegator in Chief, his appointments could have an outsized influence.

What happens in terms of control of the House and Senate, as well as the overall post-election ‘tone’, will also be important. If the temperature remains highly acrimonious, there will be contention and delays in the naming and confirmation of senior staff. This could affect the process, prioritization, and timing of some big ticket items on the FCC’s docket. The FCC has a lot going on already. The AT&T-Time Warner deal will land at least partially on its plate, at minimum as an important litmus test for network neutrality.

Here is a quick run-down of some of the issues a new Administration is likely to face.

Network Neutrality — This is something President Obama strong-armed through the FCC. So far, the FCC has been fairly hands-off in its application of NN, for example, allowing zero-rating services such as T-Mobile’s Binge On. The AT&T-Time Warner deal will be an important test, given AT&T’s current practice of zero-rating DirecTV content for AT&T subscribers and its plans to do the same with the upcoming DirecTV NOW service. I think the FCC’s tone will continue to keep the application of NN at a high, “B to B” level, for instance, ensuring combined distribution and content companies (AT&T-Time Warner, Comcast-NBC Universal) do not discriminate against new media and OTT players (Netflix, Amazon).

Spectrum — There is a lot going on in the spectrum department right now. The new Administration is likely to inherit the 600 MHz auction – both its final rounds and its implementation. It’s a complex undertaking. Hot on the heels of that is the re-auctioning of DISH’s AWS licenses. The 3.5 GHz ‘shared spectrum’ (CBRS) initiative also has some important milestones hitting sooner rather than later, such as selecting and certifying the Administrators and coming up with an auction framework and other procedures. There is still some opposition to the FCC’s April CBRS Report and Order that the next FCC will have to address to keep this moving forward.

Another priority will be keeping the early momentum going on 5G. This is important from the standpoint of the US continuing its leadership in advanced wireless networks. A lot of innovation and work needs to happen to make the millimeter wave bands usable for commercial wireless services.

Broadband — The National Broadband Plan was one of the signature tech initiatives of the Obama presidency. Its implementation has been somewhat of a mixed bag. Broadband availability expanded and average speeds steadily improved. But there was also a lot of squandered money, the broadband market is not very competitive, and US average speeds are still very much middle of the pack.

A Clinton administration would be more likely to keep the broadband gravy train going. Fixed wireless/5G and the deployment of large numbers of small cells will become a bigger part of broadband evolution over the next four years. The FCC’s Mobility Fund II, which is wireless’s version of (and contribution to) the Universal Service Fund, is part of the equation, too.

Business Data Services — These are the FCC’s new ‘special access’ rules, which would impose price caps on what telecom companies can charge other companies or businesses for bulk data connections — often referred to as backhaul. The FCC is racing to get this done by the end of the year. BDS is important to the evolution of broadband and 5G, because bigger pipes are needed to deliver the capacity required by those services. Backhaul prices can be prohibitively expensive in un-competitive markets. If the FCC doesn’t vote on BDS by the Inauguration, its status could be in limbo and/or its proposed rules could be revisited.

Set-Top Boxes — The vote on this controversial Wheeler initiative to open up the set-top box market to competition has been delayed, after FCC commissioners could not come to an agreement. The FCC could try to get this done by the end of the year and before a new Administration takes power in January (although the expiration of Rosenworcel’s seat in December adds a tasty plot twist). But it is equally likely this will land in the new Chairman’s lap. The political winds of early 2017, plus the new Administration’s having to deal with the AT&T-Time Warner deal, could affect the direction of this proposal.

M&A and Industry Consolidation — The approaching election has certainly been a factor in the accelerated pace of tech M&A activity during the second half of 2016. The new Administration will have to rather quickly deal with the proposed AT&T-Time Warner deal. This will be an important litmus test for future deals because it touches on many fractious issues that both the DOJ and the FCC will have to deal with more broadly: cross-ownership of assets, media consolidation, and the Internet’s impact on traditional distribution channels.

I also believe the wireless industry will revisit the consolidation issue early-ish in the new term. Sprint and T-Mobile could try again to get a deal done, or one of them could get acquired by a cable company. DISH is also a factor here.

Congress might be on hold between November and January, but the FCC still has a lot on its plate. From a tech industry perspective, Obama’s eight years were certainly active on the regulatory front, which no doubt angered those preferring a more hands-off public sector. But Obama’s initiatives, particularly with regard to broadband and spectrum, will be almost universally viewed as laudable. There is a lot—a lot—going on in our sector and, regardless of who wins on November 8, we will need some minimal level of government effectiveness to keep our fast-changing market moving forward.

The Mainstreaming of the Mac

There’s been lots of talk since Apple’s event last week about the reception to the new MacBook Pros, especially among the Apple commentariat. It’s fair to say the backlash against these new devices is stronger than for any MacBook announcement I can remember and yet it’s mostly coming from two particular sets of people – those who use heavy-duty creative applications such as Photoshop and those who develop for Apple platforms. This is easily Apple’s most vocal audience and so such a response must be at least a little disheartening. But it’s also worth remembering that Apple – and even the Mac in isolation – has long since gone mainstream and is bigger than these groups. Apple’s challenge now isn’t serving this hardcore base but pleasing the much larger mainstream Mac user base without alienating the power users.

Apple’s increasingly diverse base

I wrote a post a while back about the counterintuitive liability Apple has in its growing customer base. On the one hand, this customer base is a huge asset, especially given the upgrade cycles for devices like the iPhone and the ability to sell services to a captive group of users. But on the other hand, the increasing diversity of this base can also be a liability, because Apple now has to please many groups in a much less homogeneous base than in the past. The problem is the public image of Apple among many in the media and beyond continues to be of a company that serves mostly creative professionals. This perception has led to a lot of misguided commentary over the past week, both about the damage Microsoft’s Surface Studio could do to Apple’s Mac base and about the perceived shortcomings of the new MacBook Pro line.

Apple’s Mac base today

The reality is that Apple’s installed base of Macs today is likely around 90 million. That’s up enormously over the last fifteen years or so – it was around 25 million in the early 2000s. As that base has grown, it’s diversified considerably. Just visit any college campus to see row on row of MacBooks in lecture rooms and study halls. These aren’t creative professionals and they’re not even using their MacBooks for particularly resource-intensive tasks. But, of course, there are still creative professionals and Apple developers who use Macs for work. So it’s worth thinking about what percentage of the overall base these users might represent.

Here are some data points:

  • In 2013, Adobe estimated it had an installed base of around 12.8 million users of its Creative Suite software, with another 250,000 on Creative Cloud. Around 40% of this revenue came from what Adobe described as creative professionals, with another 25% coming from other creative people in businesses, 10% from creative people using it at home, and 25% from education
  • Adobe currently has around eight million Creative Cloud subscribers (this is how Adobe now sells its creative suite, including Photoshop, Illustrator, Premiere, and so on)
  • At WWDC this year, Tim Cook announced Apple had 13 million registered developers

If we put these numbers together, we get a picture of 8-13 million users of Adobe’s creative products and another 13 million or so Apple developers. Of course, of those Adobe users, a good chunk will be using Windows versions rather than Mac versions. At the absolute outside, though, it gives, at most, around 25 million total users in the two buckets that have been most vocal about the MacBook Pro changes, out of a total base of around 90 million, or around 28%. Realistically, that number is probably quite a bit smaller, perhaps around 15-20% of the total. Of these, not all will share the concerns of those who have been so outspoken in the past week. To look at it another way, Apple sold 18.5 million Macs in the past year, which might end up being roughly the same as the combined number of creative professionals and developers in the base.
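The arithmetic above is simple enough to sanity-check (using the rounded estimates from the figures cited; both numbers are rough):

```python
# Outside estimate of the share of the Mac installed base made up of the
# two most vocal groups (creative professionals on Mac plus Apple developers).
mac_base = 90_000_000       # estimated Mac installed base
vocal_users = 25_000_000    # absolute outside estimate from the figures above

outside_share = vocal_users / mac_base
print(f"{outside_share:.0%}")   # prints "28%"
```

Since a good chunk of Adobe’s users are on Windows, the realistic share lands closer to the 15-20% range.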

In the end, the picture that emerges is of a Mac base in which the users who have been expressing concerns or frustration with the changes are in the minority. The vast majority of the user base is in other categories, principally general purpose consumer and business users. How does the rest of the base feel about the new MacBooks? Well, of course, that base is much less vocal and less visible – the general purpose Mac user tends not to blog or host podcasts about Apple. They’re much more likely to quietly keep using the products they have and occasionally upgrade to something new. The best place to look for their feedback is sales numbers for the Mac. Those have been down a little lately as the existing Macs have been getting a little long in the tooth and those in the know have been waiting for upgraded machines.

However, Phil Schiller said this week that online orders for the new MacBooks were higher than they’ve ever been for a new product before, suggesting that some of this pent-up demand is being released now. Mainstream users – and likely quite a few from the professional class of MacBook users too – are buying this new product despite the misgivings some power users have. We won’t know the actual numbers on how these MacBook Pros are selling for at least three months – and probably longer. But my guess is those sales numbers will suggest the mainstream base cares a lot less about the subjects of the past week’s criticism and a lot more about a decent bunch of spec upgrades, thinner and lighter hardware, and some interesting new features.

Keeping the pro base happy

Of course, Apple can’t simply ignore the professional base – though these users may be a minority among the overall set of Mac customers, they are an important segment and, as we’ve already seen, a vocal one. Pleasing them is important in its own right but also as a way to influence broader perceptions of the Mac and Apple as a company. Apple likely needs to do more here to mollify this base. For starters, it needs to update the desktop Macs, especially the Mac Pro, quickly. The current version of the Mac Pro suffers from being less upgradeable than its predecessor. With that being the case, it requires hardware refreshes more – not less – frequently. It might also be a reasonable concession to the complaints from this base to make it more upgradeable. I suspect Apple will have to think hard about how to please those who want a portable yet ultra-powerful machine, which is really the even narrower segment that’s been criticizing the new MacBooks. The portability/power tradeoff it’s made in the new machines seems to be fine for the mainstream, but that’s the one thing that seems to be creating the most problems for the hardcore base and that’s worth addressing.

Touchscreen or No Touchscreen, That is the Question!

A lot has already been written about Apple’s Touch Bar for the MacBook Pro and how Apple should have just gone all in and actually added a touchscreen. I hinted on the day of the event that the Touch Bar could actually end up being more impactful than a touch screen and I would like to explain why.

Windows Touch Screens Were a Response to Mobile

I think it is important to look at why we have touch screens in the Windows camp.

Touch screens on Windows were not the result of a platform need. When we started to see hybrid devices running Windows, we were still on Windows 8, which was not optimized for touch. Nor were touch screens the result of an innovation aimed at changing the way we worked and interacted with content.

We got touch screens because Windows as a platform was trying to catch up to mobile.

With very little opportunity for growth in smartphones, and with the iPad at the high end and cheap Android tablets at the low end eating into PC sales, Windows PC makers wanted to fight back by adding the one feature the world seemed never to get enough of. By adding touch to PCs, vendors hoped to reverse the downward trend in PC sales while decelerating tablet growth.

Then there was Surface. Microsoft started Surface because what vendors were releasing at the time was failing to compete with tablets. Consumers were not interested in buying a new PC, and enterprises were still not sure they wanted to pay the premium that touch added to the new machines. Surely productivity did not need touch!

Not just about the hardware

Even Surface did not hit a home run the first time around. While it was the best hardware Windows had to offer at the time, the first iteration of Surface running Windows 8 was a less than optimal experience when using touch. The obsession with competing with the iPad was also giving rise to confused products like Surface RT.

Fast forward to today and you have the Surface Pro 4 running Windows 10, offering a full computing experience in a versatile form factor with an OS that works well with both touch and keyboard.

Looking at hardware alone, however, is not enough to understand how far a device can go when it comes to bridging PCs and tablets. Apps have been key in tablets. So much so that the market has been clearly split in two: a high-end that is dominated by iPad, where there are over one million dedicated apps, and a low-end market where Android tablets reign supreme mainly as content consumption screens.

Windows-based 2-in-1s, Surface included, suffer from the lack of touch-first apps that would help move the needle in adoption and, most of all, in engagement and loyalty. It is for this reason that seeing Microsoft invest in first-party apps is so refreshing. Microsoft is delivering value and, hopefully, showing developers the potential of both apps and new devices such as the Surface Dial. In an interview with Business Insider, Panos Panay, VP of Microsoft Devices, said something I could not agree with more: “The entire ecosystem benefits when we create new categories and experiences that bring together the best of hardware and software.”

Meanwhile, across the fence, the Mac App Store has not captured developers in the same way the iOS App Store has. The prospect of being able to reach hundreds of millions vs. tens of millions of users has kept a lot of developers focused on iPhone and iPad.

Adding touch support to macOS Sierra might have left users not much better off than they were before. I assume developing for the Touch Bar is much easier than designing a brand new Sierra app optimized for touch, and it ultimately results in a better experience for the user.

The “I need a keyboard” argument

Clearly, Apple did not do the Touch Bar just because it was easier to develop for. Apple continues to maintain that vertical touch is not the right approach. Many disagree, because our extensive use of touch on phones and tablets has us reaching out to touch our screens more and more often. Yet, when we touch our screens, we generally want to scroll or select. We really do not want to do complex things, which raises the question: why can’t we do it on the trackpad below our keyboard? We can discuss this point till the cows come home and find pros and cons on both sides.

So let’s look at this point a little differently. There are two main reasons why someone buys a MacBook Pro today: the OS and the keyboard. Rightly or wrongly, many people still think iOS is not a “full OS” – another point we can discuss till the cows come home. But the keyboard is key.

If the keyboard is so important for these users, it seems fitting Apple focused on making that experience better. In a recent interview for CNET, Jony Ive said:

“Our starting point, from the design team’s point of view, was recognizing the value with both input methodologies. But also there are so many inputs from a traditional keyboard that are buried a couple of layers in…So our point of departure was to see if there was a way of designing a new input that really could be the best of both of those different worlds. To be able to have something that was contextually specific and adaptable, and also something that was mechanical and fixed, because there’s truly value in also having a predictable and complete set of fixed input mechanisms.”

Taking touch and contextualizing it to the keyboard to make gestures, steps, and functions more natural, immediate, and precise makes a lot of sense to me. As is often the case with Apple, you get what you asked for, just not in the form you thought you wanted.

What Does This Mean for the Future?

For Apple, it means serving two different audiences that think of computing in different ways. Apple will do so for as long as it takes for MacBook users to be convinced the iPad Pro and iOS 10 represent the next computing platform.

For Microsoft, it is about focusing on the larger and longer term shift that will see Mixed Reality play a big role in the way we interact with devices, the way we do business, and the way we learn. Microsoft is making sure it is shaping its own path rather than finding itself blindsided and left to scramble as it did with mobile.

It’s Time for an IoT Security Standard

The writing has been on the wall for some time. Worse, the recent DNS attack that brought down portions of the Internet strongly suggests that previously predicted concerns have become unpleasant realities.

The problem? Security, or the lack thereof, for the billions of things getting connected to the Internet. Unfortunately, enormous percentages of smart home security cameras, connected DVRs, industrial equipment controllers, wearables, medical equipment, cars, and many more devices are being put online with little to no security protection.

As a result, many of these devices are subject to hacking, in some cases with potentially life-threatening results. And to make things worse, many are also vulnerable to being unwittingly overtaken and silently re-used in other types of cyber-attacks, like the DNS attack that rendered many popular web sites unreachable a little over a week ago.

This nearly complete lack of security has been talked about by some tech industry observers for years. But despite all the talk, little real action is being taken on an industry-wide basis.

Given the seriousness of the problem and its potential impact not only on our daily lives, but also on the security of critical infrastructure and even national security, it’s surprising and somewhat shocking how much inaction there has been. After all, devices that plug into the wall for power require safety approval before retailers will sell them in the US, so why shouldn’t any device that gets “plugged” into the Internet require an approval process as well?

Many of the early electrical safety certification tests from UL (formerly Underwriters Laboratories) were developed for the safety of consumers, but the impact on electrical power utilities was likely considered as well. In exactly the same way, IoT security standards need to be developed both for the safety of the individual using a device and for the potential impact on the newest utility in our lives: the Internet.

To be fair, not all IoT security issues involve the possibility of immediate physical harm that electrically powered devices have, but some do. Plus, the potential societal disruption and associated physical threats that an IoT-driven security problem can cause could be much more widespread than any individual device could create.

Of course, the challenge of creating any kind of security standard is determining what exactly would be included and how it would be measured. Security is a significantly more complicated and nuanced topic than the spread of an electrical charge, but that doesn’t mean the effort shouldn’t be undertaken. It’s just going to take a lot more effort from more people (and companies).

Thankfully, there are several efforts being driven by individual companies to help address some of these security concerns. Chip IP company ARM, for example, whose technology is at the heart of an enormous number of IoT devices, recently added new levels of hardware security to its line of Cortex M microcontrollers. In addition, concepts like a hardware root of trust, trusted execution environments, biometric authentication and more are all being actively deployed by a variety of component and device vendors that feed into the IoT supply chain. While they won’t solve all security issues, leveraging these technologies as a starting point would seem to be a pragmatic approach.

In addition to setting those requirements, determining who administers the testing would have to be resolved. Logically, companies like UL and other members of the Nationally Recognized Testing Laboratories (NRTL) Program would be good choices. Support would also have to come from the companies that sell and/or install these types of devices. Technically, UL approval is not required to sell a device in the US, for example, but practically speaking, retailers and others who sell these devices are unwilling to accept them without some kind of approval for fear of potential insurance risks. An IoT security standard would require a similar level of support (and initial willpower) to be effective.

It’s certainly naïve to think that a single type of security standard could possibly stave off all the potential security threats that IoT devices are now raising. But it’s equally naïve to believe that nothing can or should be done about the problem. The task won’t be easy and early iterations may not be great, but it’s clear that the time has come to do something. Let’s hope some industry associations and other parts of the tech ecosystem have the guts to get an IoT security standard started and the will to stick it out.

Why the MacBook Pro’s Touch Bar Matters

How many of you remember the role macros played in the early days of the PC? Macros are basically shortcuts: a way to set up an often-used spreadsheet or apply a set of database formats for repetitive data input. They were very important in the days when Microsoft’s DOS ruled the PC world. Even today, programmers use macros all the time, and power users still create them in the various apps where they are supported. However, most users don’t even know these shortcuts exist or, if they do, consider them too difficult to find and use.

Then Apple introduced the Mac with its GUI. This, and the next generation of graphical user interfaces, made navigating through apps much easier. These GUIs also introduced an updated version of cut and paste that, in many ways, allowed a person to do what macros did: repeat the same operations over and over within an app.

The first time I saw and played with the Touch Bar on the new MacBook Pros, the concept of those old macros popped to mind. Like many power users who read this column, I know the value of creating macros and applying them to apps to speed up a particular business process. What Apple has done with the Touch Bar is basically deliver this kind of functionality, giving the power of macros to the masses.

Another way to look at this is to look back on Apple’s influence on UIs in the past. With the Mac, they introduced the graphical UI and the mouse, which dramatically advanced the human-to-machine interface. With the iPhone and the iPad, they introduced touch, something that is now mainstream in UIs for smartphones and tablets and even PCs and laptops. But Apple’s philosophy on touch does not extend to laptops and iMacs for one key reason. Jobs always believed, right or wrong, that when your hands were on the keyboard, the best position for input was via a keyboard and mouse. He felt the motion of taking a hand from the keyboard and moving it to the screen as part of navigation was unnatural. Although adding a keyboard to the iPad breaks with this view, that is more a function of the iPad’s design, and many of us who use iPads with keyboards would love a mouse to use with them. An interesting side note comes from Microsoft’s Surface tablet: most of my friends who use it with a keyboard also carry a mouse, because touch input is not as precise as a mouse.

The Touch Bar is an important evolution of Apple’s contribution to user interface design. It brings the functions of power-user macros to the general user by demystifying the concept of shortcuts for repetitive tasks, and it adds easy, fast access to all types of functions within applications that support it. This is why the Touch Bar matters. Once people start using it, it will be viewed as a logical next step in UIs for laptops, and we will want it on other Apple laptops and desktops as well.

This will be especially true as the software community embraces this new feature and uses Apple’s APIs for the Touch Bar in their own applications. At the Apple event, we saw just the tip of the iceberg: its use in Apple’s own applications and in ones from early partners like Adobe. But Microsoft plans to support the Touch Bar APIs in all of their Mac applications and, by early next year, we should see thousands of macOS apps supporting it. This gives Mac users a new way to speed up navigation and access within apps on a portable computer and significantly enhances the laptop experience.

As for the new MacBook Pro’s design, I believe it will be a big hit with Apple’s high-end customers, and the entry product, which still has the older function keys, will be targeted as a replacement for the MacBook Air, especially in the enterprise. I have played with this model for about four days now and can see it as a great replacement for the Air.

I am concerned about the new MacBook Pro’s pricing but, to be fair, if it meets the needs of Apple’s pro customers, it will still sell well. And if the Touch Bar really is the next evolution of Apple’s contribution to mobile user interface design, I suspect it will eventually be on all laptops Apple brings to market and, if history is our guide, the prices of these MacBook Pros will come down in next-generation models.

I realize there will be naysayers who are skeptical about the role the Touch Bar will play in the future. But if there is one thing I have learned over 35 years of covering Apple, it is that when they put a lot of thought and detail into user interfaces, it is best to take note. That is why I believe the Touch Bar not only matters but that it, along with the special chip that powers it, will have more of an industry impact than most can see right now.

Apple’s Future Is Ear

Apple’s Transition From Looking and Touching to Listening and Talking

Part 1: Looking Back

Angst

As you may well know, there’s an awful lot of angst concerning Apple’s removal of the headphone jack from their latest model iPhones.

Every new idea has something of the pain and peril of childbirth about it. ~ Samuel Butler

I won’t rehash it all other than to say that a lot of people — and I mean a LOT of people — disagree with Apple’s decision to remove the headphone jack from the recently released iPhone 7. And when I say a lot of people disagree with the removal of the headphone jack, I mean they VEHEMENTLY disagree.

[pullquote]Taking the headphone jack off the phones is user-hostile and stupid[/pullquote]

Taking the headphone jack off phones is user-hostile and stupid. ~ Nilay Patel

Wow. Strong words.

Don’t worry about people stealing an IDEA. If it’s original, you will have to ram it down their throats. ~ Howard Aiken

If you’re going to sin, sin against God, not the critics. God will forgive you, but the critics won’t. ~ paraphrasing Hyman Rickover

So, are the critics right? Is Apple doing their customers a disservice?

New ideas come into this world somewhat like falling meteors, with a flash and an explosion, and perhaps somebody’s castle-roof perforated. ~ Henry David Thoreau

Or is it we who are doing Apple a disservice?

Our dilemma is that we hate change and love it at the same time; what we really want is for things to remain the same but get better. ~ Sydney J. Harris

Looking Back

Before we try to answer the questions posed above, let’s first take a step back in order to gain a broader perspective.

Acquire new knowledge whilst thinking over the old, and you may become a teacher of others. ~ (probably not) Confucius

I believe that the past can teach us a lot about the future.

The further back you look, the further forward you can see. ~ Winston Churchill

So before we discuss the removal of the headphone jack and the viability of Apple’s new Bluetooth AirPods, let’s first take a look back on some computing history.

Smaller, Ever Smaller

Since the advent of the Apple II and the rise of the mass-market consumer PC, you hear “computer” and you think “monitor, mouse, keyboard,” in some variation. ~ Matt Weinberger, Business Insider


But that’s not the way it’s always been.

Computers have gone from Mainframes that took up an entire floor, to Minis that filled an entire office, to PCs that sat on desktops, to Notebooks that lay on our laps, to Smartphones that rested in pockets, to watches that wrapped around our wrists. I can’t be the only one who sees the pattern here. Every generation of computer has gotten smaller and smaller. And that trend is not going to stop. It’s not a question of “if” computers are going to get smaller, only a question of “when.”

Well, let me correct myself. It’s not only a question of “when” computers will get smaller, it’s also a question of “how.” Making computers smaller is relatively easy. Making them smaller while maintaining their usefulness is not so easy and does, in fact, pose a significant challenge.

The Windows Mobile Mistake

We may not know what the next User Interface should be, but we know what it shouldn’t be. It shouldn’t be a smaller version of the current User Interface.

Remember how Microsoft tried – for years and years and years – to squeeze its desktop User Interface into tablets and phones? Nowadays, we look back and mock Microsoft for those early, lame attempts to create a modern phone interface. But that smug point of view is simply retroactive arrogance. We now know what Microsoft should have done, so we’re astonished that they didn’t employ the same 20/20 hindsight that we now do. But at that time — although almost none of us liked the tiny menus or the easy-to-lose styluses — no one had a better idea.

Not only that, most of us didn’t even know that a better idea was needed.

Unique User Interface

So a smaller version of the current User Interface provides a bad experience for the User. What then is the solution?

The solution, of course, is a brand new User Interface. It turns out that each successive generation of computer requires its very own unique User Interface — a User Interface specifically tailored to work with the new, smaller form factor.

Unfortunately, creating a brand new User Interface is easier said than done, in part because it’s extremely counterintuitive. In hindsight, all the best user interfaces look obvious. In foresight, those self-same user interfaces look like obvious failures.

Macintosh

Take, for example, the User Interface employed by the Macintosh.
The User Interface of the Macintosh was soon to become the standard for desktop PCs, with many of its features still in use today. But at the time, Xerox — which created several of the building blocks for the soon-to-be Macintosh User Interface — didn’t know what they had.

When I went to Xerox PARC in 1979, I saw a very rudimentary graphical user interface. It wasn’t complete. It wasn’t quite right. But within 10 minutes, it was obvious that every computer in the world would work this way someday. ~ Steve Jobs

Yeah, Steve Jobs instantly saw the promise of what was to become the Macintosh User Interface…

…but most of us aren’t Steve Jobs. Like the engineers at Xerox, we don’t recognize the value of a new User Interface even when we’re looking right at it.

By the way, anyone who thinks Steve Jobs and Apple “stole” the UI from Xerox needs to read this article and see this video.

iPhone

Another example of not knowing what we had was the iPhone. I think most everyone would now agree that, with the introduction of the iPhone, Steve Jobs and Apple knocked the Smartphone User Interface out of the park. But that’s not how people saw it at the time.

Some thought the iPhone was an embarrassment to Apple:

Apple should pull the plug on the iPhone… What Apple risks here is its reputation as a hot company that can do no wrong. If it’s smart it will call the iPhone a ‘reference design’ and pass it to some suckers to build with someone else’s marketing budget. Then it can wash its hands of any marketplace failures… Otherwise I’d advise people to cover their eyes. You are not going to like what you’ll see. ~ John C. Dvorak, 28 March 2007

Internet commentators were no more impressed with the newly announced iPhone than were the press:

Im not impressed with the iPhone. As a PDA user and a Windows Mobile user, this thing has nothing on my phone..i dont see much potential. How the hell am I suppose to put appointments on the phone with no stylus or keyboard?!…No thanks Apple. Make a real PDA please….

lol last i checked many companies tried the tap to type and tap to dial … IT DOESNT WORK STEVIE, people dont like non-tactile typing, its a simple fact, this isnt a phone its a mac pda wow yippie….I mean it looks pretty but its not something i forsee being the next ipod for the phone industry…

im sorry but if im sending text messages i’d rather have my thumb keyboard than some weird finger tapping on a screen crap.

Touch screen buttons? BAD idea. This thing will never work.

Apparently none of you guys realize how bad of an idea a touch-screen is on a phone. I foresee some pretty obvious and pretty major problems here. I’ll be keeping my Samsung A707, thanks. It’s smaller, it’s got a protected screen, and it’s got proper buttons. And it’s got all the same features otherwise. (Oh, but it doesn’t run a bloatware OS that was never designed for a phone.) Color me massively disappointed.”

And, of course, even years after the iPhone appeared on the scene, competitors continued to overlook its significance:

Not everyone can type on a piece of glass. Every laptop and virtually every other phone has a tactile keyboard. I think our design gives us an advantage. ~ Mike Lazaridis, Co-CEO, Research In Motion, 4 June 2008

So globally we still have the world running on 2G internet. Blackberry is perfectly optimized to thrive in that environment. That’s why the BlackBerry is becoming the number one smartphone in those markets. ~ Mike Lazaridis, Co-CEO, Research In Motion, 7 December 2010

Lessons

The lesson here is fourfold:

First, new user interfaces are hard. Really, really hard.

Second, we often don’t realize a new user interface is even needed.

Third, each user interface is unique — radically different from the User Interface that preceded it.

People should think things out fresh and not just accept conventional terms and the conventional way of doing things. — Buckminster Fuller

Fourth, even when a new user interface is introduced, and even if it ends up being the perfect solution in the long run, in the short run it’s not met with cries of “Thank goodness you’ve arrived!” No, it’s met with scorn, derision and dogged resistance.

Why Bother?

Before we go any further, I guess we should ask ourselves: “Why do we even need smaller computers that require a new User Interface anyway? Smartphones are great, right?”

Well, yes and no.

Smartphones are wondrous supercomputers that we carry in our pockets and which can solve a multitude of problems. But for some tasks, Smartphones are far from ideal.

One problem with Smartphones is that they are demanding. They cry out for our attention. They buzz, they beep, they ring, they flash, they vibrate. They call to us, “now, Now, NOW! Pay attention to me now, damn it!”

The word ‘now’ is like a bomb through the window, and it ticks. ~ Arthur Miller

Another problem with Smartphones is that they are intrusive. To interact with a Smartphone, we must look at it. When our focus is on the Smartphone, our focus is off everything else.

This can be socially awkward when we hunch over our Smartphones and ignore those around us. It can be amusing as we watch those using Smartphones bump into walls or walk into water fountains. It can be deadly as we walk into traffic while staring at our Smartphones or foolishly attempt to text while driving.

Part 2: The Next User Interface

Just as we needed a brand new “touch” user interface in order to turn smaller Smartphone form factors into a usable computing device, we now need a brand new User Interface in order to turn even smaller computer form factors into usable computing devices.

— PCs used a mouse and a monitor;
— Notebooks used a trackpad and a monitor;
— Smartphones used a touch-sensitive screen and a monitor; and
— Watches are still a work in progress, but they currently use a variety of interfaces, like touch, 3D Touch, the Digital Crown, the Taptic Engine…and a monitor.

But this presents us with a new challenge. Ever since the Apple I was introduced in 1976, every User Interface has had one thing in common — a monitor. But usable screen sizes have gotten as small as they can get. How then do we make a computer both smaller AND more usable?

Google Glass

Google Glass was an early attempt at creating a User Interface suitable for a smaller computer form factor. It solved the screen size dilemma by resting the screen on one’s face like a pair of glasses. It used augmented reality to superimpose bits of helpful information over the world as viewed through a small camera lens. The vision was for the device to always be present, always be watching, always be listening, always be ready to assist with some digital task or to instantly recall some vital piece of information. People had very, very high hopes for Google Glass.

Google glasses may look and seem absurd now but (Brian) Sozzi says they are “a product that is going to set the stage for many other interesting products.” For the moment, at least, the same cannot be said of iPhones or iPads. ~ Jeff Macke, Yahoo! Breakout, February 27, 2013


So did Google Glass “set the stage for many other interesting products”? Not so much. It failed so badly that it came and went within the span of three short years.

So what went wrong?

Well…looks aside, what went wrong?

Google Glass was incredibly intrusive, both for the user and, significantly, for those in the presence of the user. From the outside, Google Glass stood out like a sore thumb. From the inside, Google Glass inserted itself between the user and the world.

Google Glass is in your way for one thing, and it’s ugly…It’s always going to be between you and the person you’re talking to. ~ Hugh Atkinson

Further, Google Glass was a pest, always bombarding the user with distracting visual images.

I don’t think people want Post-it notes pasted all over their field of vision….The world is cluttered up enough as it is! ~ Hugh Atkinson

Perhaps even worse was the way Google Glass intruded upon the lives of others. People resented the feeling that they were being spied upon and began to call those who wore the devices “Glassholes.”

Finally, Google Glass just wasn’t that useful. It didn’t do many things and the things it did it didn’t do all that well.

Enter The Voice User Interface

[pullquote]We’re moving from view first to voice first[/pullquote]

I’m convinced that Voice is going to be the next great User Interface; that we’re moving from touching and looking at our Smartphones to talking and listening with our headphones; that we’re moving from View First to Voice First.

Most agree the next major UI shift after touch is voice. ~ J. Gobert (@MrGobert)

More importantly, I’m convinced that Apple is convinced that Voice is the next great User Interface…

…which is no big deal, because Amazon, Google, Microsoft and most others are convinced too…which is why they’re all investing so heavily in the area.

(D)igital assistants are poised to change not only how we interact with and think about technology, but even the types of devices, applications and services that we purchase and use. ~ Bob O’Donnell

The User Interface Company

Apple is a User Interface company. Their business model is to:

— Create a revolutionary new User Interface;
— Use design principles to build an integrated hardware and software product;
— Iterate the hell out of it;
— Carefully select another area of computing ripe for disruption; and
— Do it all over again.

Some past examples:

— The Apple I added a monitor.
— The Macintosh added the Graphical User Interface (GUI) and the Mouse.
— The Apple Notebook added the recessed keyboard and trackpad.
— The iPod added a click wheel and relegated all the heavy lifting to the Personal Computer.
— The iPhone and the iPad added a touchscreen.

In 1976, with the Apple I, Apple started the modern era of personal computing by adding a monitor to the User Interface. In 2016, Apple intends to extend the era of the personal computer by removing the monitor from the User Interface.

The new User Interface would be — as all User Interfaces must be — a radical transition. It would take us from touching to talking; from looking to listening.

The most interesting disruption comes from attacking an industry from what looks like an irrelevant angle. ~ Benedict Evans (@BenedictEvans)

Introducing The AirPod

Apple recently announced a line of wireless headphones, called AirPods. The AirPod appears to represent Apple’s vision for the visionless User Interface of the future. With advanced Bluetooth audio, a powerful W1 chip, two microphones, and, yes, the elimination of the 3.5mm audio jack, the AirPod is the beginning of Apple’s transition from User Interfaces for the eyeballs to a User Interface for the eardrums.

So what’s the big deal? We’ve had wireless headsets for a while. True enough. But they’ve been confusing to pair, frustrating to use, had limited battery life, and were, overall, relatively powerless. The AirPods are not just another set of headphones. Rather, they are the start of a whole new generation of headsets. The new AirPods provide:

— Painless pairing;

— A charging case that stores, charges, and pairs the earbuds;

— Optical sensors that make the first earbud placed in the ear the primary earbud for phone calls;

— Sharing between two people;

— A long tube that provides room for a larger battery, thus providing longer battery life;

— Microphones at the end of the tubes, which reduce interference from our heads and allow better ear-to-ear communication; and

— Activation of Siri either by saying “Hey Siri” or by double-tapping on either of the earbuds.

Good Design

The AirPods’ simplicity and demure demeanor is consistent with the principles of good design.

Good design is unobtrusive. Products fulfilling a purpose are like tools. They are neither decorative objects nor works of art. Their design should therefore be both neutral and restrained, to leave room for the user’s self-expression. ~ Dieter Rams

The advance of technology is based on making it fit in so that you don’t really even notice it, so it’s part of everyday life. ~ Bill Gates

If it disappears, we know we’ve done it. ~ Federighi 9/10/13

Technology is at its best and its most empowering when it simply disappears. ~ Jony Ive

I like things that do the job and kind of disappear into my life. Like Levis. They just kind of get faded and disappear, and you don’t think about it much. ~ Steve Jobs

The Invisible Hand

[pullquote]Apple has a secretive project in the works named “Invisible Hand”[/pullquote]

Bloomberg has reported that Apple has a secretive project in the works that would dramatically improve Siri. Currently, the Siri voice assistant is able to respond to commands within its application. With an initiative code-named “Invisible Hand,” Apple is researching new ways to improve Siri. Apple’s goal is for Siri to be able to control the entire system without having to open an app or reactivate Siri. According to an unnamed source, Apple believes it’s just three years away from a fully voice-controlled iPhone.

Note that the report said Apple thinks it is three years away from employing all of these features. Not today. Not tomorrow. Not the day after tomorrow, but three years. So don’t expect to see these advanced features anytime soon.

Veteran Apple engineer Bill Atkinson — known for being a key designer of early Apple UIs and the inventor of MacPaint, QuickDraw, and HyperCard—saw this coming a long time ago. He gave a presentation at MacWorld Expo back in 2011 in which he explains exactly why the ear is the best place for Siri. ~ Fast Company

— AirPods don’t require we look and touch. They only require we talk and listen.
— The AirPod will be always with us.
— The AirPod will be always on us.
— The AirPod does not require the use of our eyes.
— The Smartphone stands between us and the world and demands our eyes and our attention. The AirPod stands behind us and discreetly whispers in our ears.

“Yuck,” you say. “Always on? Who wants that?”

We all will.

— We can have the AirPod in our ears at all times.
— We needn’t reach into our pockets to look at our Smartphones.
— We needn’t even turn our wrists and glance at our Smartwatches.
— Tap, tap or “Hey Siri.” Computing at our beck and call.

Apple has already started down this path. If you’re activating Siri using the home button of your iPhone, Siri more often directs you to look at the screen. If Siri is activated hands free via “Hey Siri”, Siri is more talkative and less visual.

Today And Tomorrow

The possibilities for Voice activated computing are endless.

Of course, you can request music or even a specific song.

A Voice Interface will allow us to listen to our emails and texts.

Driving directions might best be served by using both visual and audio instructions. But walking instructions — which are in their infancy, but on their way — are a different matter. It’s not a good idea to look at a screen while walking. Audio only instructions are the way to go.

Third party apps will have access to all of the AirPods’ functionality.

Using a double-tap, the user can quietly request information.

Soon we’ll be able to identify a document, and simply say “print” and the artificial intelligence will do the rest.

AirPods will one day be spatially aware. They’ll remind us to take the mail with us when we leave the house, and to buy toilet paper when we pass by the local supermarket.

Soon we’ll be able to simply say “help” and the system will help us navigate a particular task or application.

We don’t have immediate recall, but our AirPods — which hear everything — will.

Siri might soon be able to recognize people, know places, identify motions and connect them all with meaningful data.

Sensors in the device will know if we are in conversation and will break in only with the most important verbal notifications.

As Siri becomes more environmentally aware, it will start to recognize important sounds in the environment. For example, if the AirPods detect a siren while the user is driving, they might temporarily mute any messages or other audio.

Soon we’ll be able to request that our intelligent assistants ask someone else a question, get the answer, and then relay that answer back to us.

Or we’ll be able to schedule a meeting, with the artificial intelligence navigating all the various questions and answers required from multiple parties to make that happen.

Proactive assistance will remind us that we have an upcoming appointment and — knowing the time and distance to the meeting — prompt us to leave for it in a timely manner.

Or better yet, a truly intelligent device will know us and understand us and remind us when our favorite sports team is scheduled to begin play.

————–

“Science fiction,” you say? Really? Look how very far we’ve come since the iPhone was introduced in 2007. Just to take one small example, people forget that Smartphones were out for years and years and years before we were able to ditch our dedicated GPS devices and switch to using GPS on our phones instead. Now using the GPS on our phones is so normal that many can’t even imagine how we managed without it.

Apple opening up Siri, is like everything else they do. Building the momentum slowly and for the long haul. ~ Nathan Shochat (@natisho) 9/26/16

Like great men and women, great computing starts from humble beginnings.

Over the years, many of the most important features to come to the Apple ecosystem were launched as somewhat basic and rudimentary iPhone features.

• Siri told funny jokes.
• Touch ID unlocked iPhones.
• 3D Touch made Live Photos come to life.

In each case, a feature was introduced not to set the world on fire overnight, but rather to serve as a foundation for future innovation and functionality. Siri has grown from giving funny, canned responses to being one of the most widely-used personal assistants that relies on natural speech processing. Touch ID is now used to facilitate commerce with Apple Pay. 3D Touch has transformed into an emerging new user interface revolving around haptics and the Taptic Engine. ~ Neil Cybart, Above Avalon

Untethered

Reviewers who have worn the Apple Watch have written that it has untethered them from their phone — that their iPhone has joined the MacBook as the “computer in the other room.” That is all going to be doubly so with AirPods and other sophisticated headphones.

— Yesterday, people used their computers when they were at their desktops or when they carried their laptops with them.

— Today, people use their phones all the time and their second screen devices – Desktops, Notebooks, Tablets — some of the time.

— Tomorrow, people will listen to their headphones all the time and look at their phones, tablets, notebooks and desktops some of the time.

Not One Device, But Many

So, am I saying the Smartphone is going away?

Hell no.

Did the mainframe go away? Did the PC go away? Did the freaking fax machine — which was invented in 1843 — go away?

Old tech has a very long half-life. ~ Benedict Evans on Twitter

Bill Gates foresaw what was going to happen to computing as far back as 2007. Well, he ALMOST foresaw what was going to happen:

MOSSBERG: What’s your device in five years that you’ll rely on the most.

GATES: I don’t think you’ll have one device

I think you’ll have a full screen device that you can carry around and you’ll do dramatically more reading off of that – yeah, I believe in the tablet form factor – and then we’ll have the evolution of the portable machine and the evolution of the phone will both be extremely high volume, complementary, that is if you own one you’re likely to own the other…

Reverse “phone” and “tablet” and Gates got it just about right. We’re not going to have just one device with just one user interface. We’re going to seamlessly move from device to device as best suits our needs at that particular time, at that particular place.

Part 3: Critiques

Fantasizing Fanboys

Nilay Patel thinks the Apple fanboys who are buying into the whole AirPod thing are as bad as the Google fanboys who bought into the whole Google Glass thing.

Watching Apple fans repeat the mistaken dreams of Google Glass is super fun. ~ Nilay Patel (@reckless) 9/16/16

I think Nilay Patel is a really, really smart guy who’s being incredibly, and inexcusably, short-sighted.

The most important things have always seemed dumb to industry experts at the beginning. ~ Jeff Bezos

Professional critics of new things sound smart, but the logical conclusion of their thinking is a poorer world. ~ either Benedict Evans or Ben Thompson ((Sadly, I don’t know who to attribute this quote to. My notes have both saying it and a search did not reveal the original source. My bad.))

The AirPod isn’t obnoxious the way Google Glass was. AirPods don’t build a barrier between you and me, between you and the world. And while Google Glass was incredibly intrusive and incredibly useless, AirPods are not intrusive at all, they’re incredibly useful today, and they will become even more useful with each passing tomorrow.

Echo Chamber

Many, many, very intelligent and respected technology observers really like the Amazon Echo and think it is the wave of the future.

I’ll admit I’m swimming in dangerous waters here — there have already been reports that Apple is working on an Echo-like device — but I don’t think Apple is going to go in the direction of the Amazon Echo. Based on Apple’s investor call, held on October 25, 2016, Tim Cook doesn’t think so either:

Most people want an assistant with them all the time. There may be a nice market for kitchen ones, but won’t be as big as smartphone.

Here’s my issue with the Echo and Echo-like products:

First, the Echo, and competing devices like Google Home, are fixed to one room. It makes no sense to have your Artificial Intelligence anchored to a single location when you can have it with you anywhere, anytime.

Second, the Echo and its lookalikes are designed to be used by multiple people. That’s convenient…but it also means that it muddles the information the artificial intelligence receives which, in turn, muddles the information that the Artificial Intelligence can provide. In other words, devices used by many people will not be able to provide data tailored for single individuals.

Many, many, many, many very smart, very thoughtful, very respected industry observers disagree with me.

When I wrote my original Siri Speaker article in March, I heard from a lot of people who didn’t understand why Apple needed to make such a product [as the Echo] when our iPhones and iPads and Apple Watch can do the job…It’s a very different experience to have an intelligent assistant floating in the air all around you, ready to answer your commands, rather than having that assistant reside in a phone laying on a table (or sitting in your pocket). ~ Jason Snell, MacWorld

[pullquote]What if your intelligent assistant were always in your ear and always with you?[/pullquote]

Well, that’s true, but what if your intelligent assistant were always in your ear and always with you?

Job To Be Done

There are those who argue that Voice input may be a nice supplement to computing, but that a Voice Interface is not sufficient because it doesn’t do everything we can currently do on our Smartphones, or even our Smartwatches.

Maybe this’ll feel retrograde in a decade, but how many people really want to control everything with their voice? It’s handy for some stuff, but not everything…. ~ Alex Fitzpatrick, Time

Don’t we say the exact same thing at the introduction of every new generation of computer?

— The Notebook couldn’t do what the Desktop did.
— The Tablet couldn’t do what the Desktop or the Notebook did.
— The Smartphone couldn’t do what the Desktop, the Notebook or the Tablet did.
— The Watch couldn’t do what the Smartphone did.
— The AirPod can’t do what the Smartphone, or even the Smartwatch does.

OF COURSE the new device is not as good as the old device at doing what the old device did best. A Notebook computer is a lousy Desktop computer. A tablet is a lousy Notebook. A Smartphone is a lousy Desktop, Notebook or Tablet. And a passenger vehicle is a lousy truck. But we don’t hire a passenger vehicle to be a truck. Neither will we hire a device using a Voice-First Interface to be a Desktop, Notebook, Tablet, Smartphone or Smartwatch.

We don’t recognize the value of a new User Interface because we measure it against the wrong standard.

Lesson #1: The New User Interface is not trying to “replace” the old user interface.

Tablets will not replace the traditional personal computer. The traditional PC is changing to adapt to the customer requirements. The tablet is an extra market for some niche customers. ~ Yang Yuanqing, Chief Executive Officer, Lenovo Group Ltd., 11 Jan 2012

The above quote misses the mark because it assumes that tablets WANT to replace the traditional personal computer.

‘This new thing will be great – once we can do all the old things on it in the old way’ ~ Benedict Evans

Each new computer form factor is being hired to do something different than its predecessor; otherwise, we wouldn’t want or need to migrate to the new device in the first place.

[pullquote]The goal is to use the new device for something it can do extremely well, especially if that something is something the old device did poorly or not at all[/pullquote]

The goal is not for the new Interface to duplicate the functionality of the old one; it is not to use our new devices to do what our old devices already do well. The goal is to use our new devices for those things they do best.

Lesson #2: We shouldn’t judge a User Interface by what it CANNOT do.

Instead of judging a new User Interface by what it cannot do, we should judge it by what it CAN DO EXTREMELY WELL, especially if it can do something well that the old User Interface does poorly or not at all.

Before you can say ‘that won’t work’, you need to know what ‘that’ is. ~ Benedict Evans

Socially Awkward

Some observers say we will not want to use Voice First because — well frankly, because it makes us look like socially awkward nerds and sound like socially oblivious geeks.

I personally still feel self-conscious when I’m using Siri in public, as I suspect lots of folks do as well.

This kind of thinking is already passé in China.

(Voice may be awkward) in the US. 100% not true in Asia. Voice is dominant input method whether public or private. ~ Mark Miller (@MarkDMill)

(F)or certain markets, like China…voice input was preferred over typing. ~ Ben Bajarin (@BenBajarin)

But let’s forget for a moment that the social awkwardness we fear is already irrelevant to a minimum of 1.3 billion people. Even for those of us who live in the West, our fear of social awkwardness is — well — it’s a little bizarre.

Apparently, this is considered ‘normal’ looking:
[image]

…but this is considered abnormal and abhorrent.

[image]

Who knew?

Don’t fool yourself into thinking that resistance to the new AirPods is anything new. There has never been a meaningful change that wasn’t resisted by self-righteous, holier-than-thou know-it-alls — like me.

People are very open-minded about new things – as long as they’re exactly like the old ones. ~ Charles Kettering

You don’t believe that resistance to the new is the norm? Then I strongly suggest you follow Pessimists Archive @pessimistsarc. (Even if you DO believe me, I still strongly suggest you follow Pessimists Archive @pessimistsarc)

Here are just a few of the things the guardians of goodness have deemed irredeemable:

— CELL PHONES: Don’t you remember when cell phones were considered anti-social?

And pretty dorky looking, too.

[image]

— WALKMAN: In the 1980s, in response to the Walkman, a town in New Jersey made it illegal to wear headphones in public. That law is still on the books today.

— RADIO: A 1938 article opined that it was “disturbing” to see kids listening to the radio for more than 2 hours a day.

— AUTOMOBILES: Early automobiles caused as much controversy then as driverless cars do today. It was common for people to yell “Get a horse!” as the newfangled cars passed them by (both literally and figuratively).

— BICYCLES: Yes, bicycles. First, bicycles were decried for allowing the youth to stray far from the farm. Second, bicycles were blamed for leading to the “evolution of a round-shouldered, hunched-back race” (1893).

— PHONOGRAPH: In 1890, The Philadelphia Board & Park commissioners “started a crusade against the phonograph.”

— KALEIDOSCOPES: Yes, kaleidoscopes! In the early 1800s kaleidoscopes were blamed for distracting people from the real world and its natural beauty.

— BOOKS: You read that right. Books. Novels were considered to be particularly abhorrent. In 1938, a newspaper ran an article with some top tips for stopping your kids from reading all the time.

Little men with little minds and little imaginations go through life in little ruts, smugly resisting all changes which would jar their little worlds. ~ Zig Ziglar

————–

[pullquote]This isn’t the first time Apple has changed the way we do things[/pullquote]

You know, this isn’t exactly the first time that Apple has changed the way we do things.

The Macintosh got us to use the mouse. And that wasn’t a given.

The Macintosh uses an experimental pointing device called a “mouse”. There is no evidence that people want to use these things. ~ John C. Dvorak, In a review of the Macintosh in The San Francisco Examiner (19 February 1984)

(emphasis added)

Remember the Day-Glo colors of the first iMacs?

[image]

Remember the iconic white earbuds of the iPod (introduced just 15 years ago this week)?

[image]

Remember what it was like before the Smartphone and how quickly we adapted to having a Smartphone with us all the time?

[image]

You think going from the Smartphone User Interface to AirPod User Interface is going to be hard? Are you kidding me? This is going to be the easiest User Interface transition ever.

If you’re strong enough, there are no precedents. ~ F. Scott Fitzgerald

[pullquote]How hard will it be to go from using headphones with our smartphones to simply using headphones all by themselves?[/pullquote]

AirPods build upon already existing habits. We’re already talking into our phones and Bluetooth devices. Who cares if we start talking into our AirPods instead? And we already use headphones with our Smartphones. How hard will it be to go from using headphones with our smartphones and smartwatches to simply using headphones all by themselves?

Good products help us do things. Great products change the things we do. Exceptional products change us. ~ Horace Dediu (@asymco) 9/4/16

Siri Sucks

If you want to doubt Apple’s ability to create a truly meaningful Voice-First User Interface, look no further than Siri. If Apple is going to rely on voice for input, and Artificial Intelligence for output, then Siri needs to be top-tier. Right now, not only isn’t Siri “good enough”, it’s just plain not good. True, sometimes Siri can be magical…but far more often it’s maniacal.

Some people think Siri is a joke. I disagree. There’s nothing funny about the way Siri fails to do what it’s supposed to be doing.

The good news is Apple is very well aware of the fact that Siri is moving from backstage to center stage. The bad news is that Apple has yet to prove that they have the ability to transition Siri from the role of a bit player to that of a lead actor.

If you’re an optimist, like me, one hopeful precedent is Apple Maps. It, too, was widely panned when it first appeared. But gradually — year after year after year after year — Apple improved it until, over time, it became “good enough” (although the title of “best” still resides with Google Maps).

Part 4: The Apple Way

Nilay Patel, and other critics, can’t understand why Apple is doing what it’s doing.

Invention requires a long-term willingness to be misunderstood. ~ Jeff Bezos

There’s nothing new in that. Apple has always been misunderstood.

Human salvation lies in the hands of the creatively maladjusted. ~ Dr. Martin Luther King, Jr.

Apple has always been willing to take chances.

Success is the child of audacity. ~ Benjamin Disraeli

Boldness has genius, power and magic in it. ~ Johann Wolfgang von Goethe

And they’ve always been mocked for doing so.


The price of originality is criticism. The value of originality is priceless. ~ Vala Afshar (@ValaAfshar) 9/27/16

But why take chances?
Why do things that you know are going to be heavily criticized?

Clear thinking requires courage rather than intelligence. ~ Thomas Szasz

Well, for one thing, that’s where the opportunity lies.

The biggest opportunities are going after complex solutions that incumbents trained everyone to think could never be made simple. ~ Aaron Levie (@levie)

For another, Apple knows that the real danger lies in NOT taking chances.

Don’t play for safety. It’s the most dangerous thing in the world. ~ Hugh Walpole

Avoiding danger is no safer in the long run than outright exposure. The fearful are caught as often as the bold. ~ Helen Keller

If you risk nothing, then you risk everything. ~ Geena Davis

The trouble is, if you don’t risk anything, you risk even more. ~ Erica Jong

It is better to err on the side of daring than the side of caution. ~ Alvin Toffler

Nilay Patel doesn’t understand what Apple understands. You can make small incremental changes — baby steps, if you will — to improve an existing product. But designing a new User Interface is revolutionary and requires radical change.

A truly great design is innovative and revolutionary. It’s built on a fresh idea that breaks all previous rules and assumptions but is so elegant it appears simple and natural once it has been created. ~ David Ngo

You can’t get to a new User Interface by taking baby steps. You get there by making a leap.

[pullquote] The most dangerous strategy is to jump a chasm in two leaps[/pullquote]

The most dangerous strategy is to jump a chasm in two leaps. ~ Benjamin Disraeli

Apple is not going to sit around and wait for their competitors.

If you’re competitor-focused, you have to wait until there is a competitor doing something. Being customer-focused allows you to be more pioneering. ~ Jeff Bezos

The competitor to be feared is one who never bothers about you at all, but goes on making his own business better all the time. ~ Henry Ford

[pullquote]Apple is not going to sit around and wait for Nilay Patel’s permission[/pullquote]

And they’re not going to sit around and wait for Nilay Patel’s permission either, that’s for damn sure.

Standing still is the fastest way of moving backwards in a rapidly changing world. ~ Lauren Bacall

Apple thinks AirPods are going to be a significant part of their future.

The future lies in designing and selling computers that people don’t realize are computers at all. ~ Adam Osborne

So they’re moving toward that future today.

Most important, have the courage to follow your heart and intuition. ~ Tim Cook 10/5/16

Why is that so hard to understand?

Podcast: Microsoft Surface and Apple MacBook Events

In this week’s Tech.pinions podcast Ben Bajarin, Carolina Milanesi and Bob O’Donnell analyze the big events this week held by Microsoft and Apple, discussing their latest PC offerings and getting a glimpse into their future strategies.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

With Touch Bar, Apple Again puts Faith in Third-Party Developers

Apple this week introduced new MacBook Pros and, in addition to bright new screens, fast new processors and, of course, ever-thinner form factors, a new hardware feature called the Touch Bar. It’s a high-quality miniature screen that runs the length of the keyboard, replacing the old row of F keys above the number keys. In person, this display looks great, and it has a unique coating that makes using it feel super smooth.

As you might expect, Apple’s Mac OS and first-party apps use Touch Bar right away but, if it is going to become a must-have feature worthy of driving Mac buyers to upgrade, third-party developers will also have to embrace it. At launch, Apple already had buy-in from big firms such as Microsoft and Adobe. But the real question is whether the rest of the developer community will follow suit and, if so, how soon?
Sticking to its Guns
I’ve lamented before that, after using a number of touch-enabled Windows notebooks, using a non-touch Mac notebook felt like a step backward. It’s easy to see Apple’s decision to put a small touch screen above the keyboard as a simple, stubborn unwillingness to bend to the larger trends in the PC industry just as it once resisted larger smartphone screens. To its credit, with the Touch Bar, Apple has put together a touch technology its executives clearly believe is a better option than a touch screen.

Apple has long suggested that reaching up to touch the screen of a Mac is unnatural and that it breaks the usage model of the notebook. In theory, I agree touching a notebook screen seems unnatural. But I also know, now that I’ve been doing it for a while, it feels pretty natural to me to reach up to touch the screen to scroll a Web page.

Keeping the Touch Bar on the horizontal axis means, as a user, I’m not reaching for the screen. But it also means I’m looking down from the screen toward the keys to find the specific, custom keys each application serves up on the Touch Bar. I suppose over time you could develop some muscle memory for unique Touch Bar keys you use often, but that seems unlikely.

After the keynote on Thursday, I participated in a deep-dive session and had the chance to spend some time with the new hardware. I can tell you this much: The Touch Bar is addictively enjoyable to use.

It works as you would expect for tasks such as scrolling through pictures and video (fast and fun), changing system settings (precise as physical buttons), and using the calculator (it’s the killer app, seriously, you heard it here first).

But where the Touch Bar really shows promise is with large, complicated apps such as Microsoft Word and Excel, and Adobe Photoshop. These apps tend to have tons of features that get lost in icon-dense ribbons or buried deep in drop-down menus. With Touch Bar, the developer can surface some of these features, making them visible and more easily accessible for the average user. Power users might scoff but, for many people, this level of increased visibility could lead to real productivity gains.

Apple tells me it is very easy for a developer to enable the Touch Bar for their apps and noted that the partners appearing on stage this week did so in a very short amount of time. It will be interesting to see if other major Mac software developers do the same in the coming weeks. Perhaps more telling will be whether smaller developers, with more constrained development time and budgets, decide such an update is worthwhile for their users.

Touch ID Impact or 3D Touch Impact?
What’s not clear to me yet is whether the Touch Bar is one of those new features that will instantly resonate with customers and become part of their daily lives, or whether it is merely an interesting technology that makes for a great demo but never really takes off in common use. A good example of the first was Apple’s introduction of Touch ID on the iPhone (and available now on the MacBook Pros with Touch Bar). That technology fundamentally changed the way the vast majority of iPhone users interact with their phone every single time they pick it up. An example of the latter is 3D Touch, an interesting technology I often forget is on my phone unless I accidentally trigger it. 3D Touch may eventually become an integral part of the iPhone interface but, right now, it doesn’t feel like most people see it that way. It’s too soon to tell which way the Touch Bar will go.

One thing is clear: Apple sees it as a feature some customers will pay to have, as the 13-inch MacBook Pro with Touch Bar carries a roughly $300 premium over a comparable model without it (note: the Touch Bar model also has a better CPU). After a brief hands-on, the Touch Bar feels to me like an important refinement to a tried-and-true interface. I’m not sure yet if it’s better or worse than a touch screen, but I look forward to testing the hardware in the coming weeks to see how it impacts my usage. And I’ll be watching closely to see which developers embrace the technology and which do not.

Microsoft’s Two-Pronged Creativity Push

Microsoft has been refining its identity and strategy since Satya Nadella took over as CEO, and much of that focus and strategy has been centered on productivity and helping people get things done. That vision has married well with Microsoft’s renewed emphasis on business products and services but it has also reinforced the sense that Microsoft doesn’t get consumers or, at least, the consumer halves of its users’ lives. Microsoft has needed a rallying point for a set of efforts around consumer use cases, and it appears to have decided on creativity as the catchphrase for this push, as demonstrated at Wednesday’s event in New York City.

A two-pronged strategy, not a single device

That creativity push has two main strands to it and it’s important to look at the totality of what Microsoft announced to see the full picture. I’ve seen a lot of people talking about the hardware side of the announcements – the Surface Studio – as evidence this effort will be marginal but I think that misses the point. This new creativity emphasis includes both new creative tools within existing products like Windows and Office and new hardware in the form of the Surface Studio and the existing Surface product line. Microsoft seems determined to challenge Apple’s historic edge among professional creatives but it is also making a play for the creative element within a broad base of consumers and professionals.

Yes, the Studio is a high-end PC that’s going to be out of reach for the vast majority of consumers, most of whom will be left with traditional PCs that don’t have all the capabilities Microsoft showed off today. But the role of the Surface Studio is, arguably, to put a stake in the ground that says Microsoft is serious about serving the creative community, not to address the needs of its mainstream audience. However, by unveiling premium hardware that’s both beautiful and innovative, Microsoft is sending a broader message about its commitment to this space and to creativity more broadly.

Where most ordinary consumers will encounter Microsoft’s creativity push first is not in hardware but in software. Microsoft’s new Paint 3D app, what looks like a GarageBand competitor called Groove Music Maker, and other enhancements in the new version of Windows 10 are more mainstream attempts to establish Microsoft as a creativity brand. Whereas the Surface Studio is a niche product, Microsoft now has over 400 million users of Windows 10 who will get the free Creators Update in the spring (or earlier, if they’re on the Windows Insider program). That’s a more mass-market strategy around creativity and has to be seen as part of the same concerted push to demonstrate leadership in this area.

Changing perceptions takes time

Though Wednesday’s announcements are a good start, it takes a long time to change deeply entrenched perceptions. Microsoft has its work cut out in trying to convince potential customers its products are more than just the workhorses they’ve always been for many. Workflows and cultures in many creative companies are built around Apple products and that won’t change overnight. However, Microsoft’s timing for these new products is great, coming at a time when Apple has been accused of neglecting its creative community. Apple, of course, has its own event on Thursday and will get an opportunity to make its case for its own vision of the future of computing.

It’s also easy to overestimate the role creative professionals play for Apple – though its Mac base was once heavily skewed toward these users, Apple has long since broadened its appeal well into the mainstream. Though losing creative professionals as a constituency might be painful for some at Apple, its mainstream appeal is what matters, and it needs to shore that up with its announcements this week and beyond. In addition, Microsoft still largely relies on third-party developers to meet the needs of professional creators, whereas Apple does much more to meet those needs directly through products like Final Cut Pro and its add-ons, Logic Pro, and so on. Microsoft has Office for more generic work tasks but still lacks a direct presence in the more creative fields specifically.

Lastly, the 3D enhancements to Paint and other apps were a mix of interesting and gimmicky. I’m not sure how many people are actually interested in creating 3D scenes of their trips to the beach but 3D animations in PowerPoint could enrich presentations in useful ways. The 3D push feels as much about finding a useful way to tie HoloLens into the consumer story as it is about creativity. Microsoft has put a lot of its next generation interface efforts into augmented reality with HoloLens but it has ended up with a product that’s far from mass market in nature. Its VR announcements on Wednesday are a concession to the reality that VR is where today’s mass market opportunities are, though Microsoft’s PC-centric push with $300-plus headsets will have a smaller addressable market than existing smartphone-centric solutions selling for around $100.

Cue Apple

Of course, now we all wait and see what Apple has in store Thursday. It obviously won’t respond directly to announcements made the day before but, as I wrote on Tuesday, it does need to demonstrate whether the next round of competition between Windows laptops and MacBooks will be defined by hardware performance advantages or by philosophical differences. Apple has been using the iPad lineup to meet many of the needs Microsoft has met for Windows users with its more PC-centric Surface products, so it will be interesting to see how Apple sets its new MacBooks apart. Not only creative professionals but mainstream users have been waiting for updates to the MacBook Pro and either an update to or replacement for the MacBook Air. Apple’s event on Thursday will need to give them compelling reasons to upgrade to new devices rather than jumping ship to Windows.

Creativity Is the New Productivity

Every three months, we are reminded of the doom and gloom of the PC market. As PC vendors report their earnings and various bean counters – I used to be one – publish their market share numbers, we are reminded replacement cycles remain long and consumers do not seem interested in upgrading.

I’ve discussed before what I see as a crucial step in breaking this process: stop talking about PC replacement and start talking about what the new PCs have to offer and the role they play in your portfolio of devices. This week, with both Microsoft and Apple holding their device events, I hope this is exactly what we are going to see.

If we look at the invites the two companies have sent out, there is not much to go on. Microsoft is a little more generous in giving us a taste of what the announcement will be. We assume it is a device event because of the time of year, although the invite itself says, “What’s next for Windows 10”. We are also invited to “Imagine what you’ll do”, which is as fluffy as an invitation can be to open our minds to new possibilities. Yusuf Mehdi, VP of marketing for the Windows and Devices group, urges us to “get ready to get creative”. So it would be safe to guess it is about a device that is going to focus on creativity.

Apple’s invite was even more cryptic, saying, “Hello again” — which many connected to the “hello” used for the Mac launch in the ’80s. Rumors have it we will see three different devices: a 13” MacBook and 13” and 15” MacBook Pros. Aside from the device specs, what will be interesting is how Apple positions these new devices against the iPad Pro. As many of you will remember, when the iPad Pro was launched, Apple ran an ad that asked, “What if your PC was an iPad Pro?” Of course, while the focus was on competing Windows devices, the question raised doubts in certain minds about what the role of the Mac will be going forward vs. the iPad Pro.

Mobility Changed the Meaning of Productivity

I think it is important to look at how our workflows have changed over the past few years to understand what role different devices could play in our lives.

According to the dictionary, productivity is a measure of the efficiency of a person, machine, factory, system, etc., in converting inputs into useful outputs. Productivity is computed by dividing average output per period by the total costs incurred or resources (capital, energy, material, personnel) consumed in that period. When we moved from analog to digital, productivity and creativity were very much intertwined: thanks to computers, we were able to do things we had not been able to do before, and in much less time.
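The dictionary definition above can be written as a simple formula; the figures below are purely hypothetical, just to make the arithmetic concrete:

```latex
% Productivity: average output per period divided by resources consumed in that period
\[
\text{Productivity} = \frac{\text{average output per period}}{\text{resources consumed per period}}
\]
% Hypothetical example: 500 units produced using 40 labor-hours
\[
\frac{500~\text{units}}{40~\text{labor-hours}} = 12.5~\text{units per labor-hour}
\]
```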

I strongly believe that mobility changed the meaning of productivity.

The “connected anytime, anywhere” world we live in has put more emphasis on the speed of that output than on its complexity or quality. Gone are the days when people put their “out of office” hat on and were not available while they were out. The only time I put my out-of-office hat on is when I am traveling in different time zones, and it is really more to apologize in advance for the delay in getting back to people than to warn them I will not be available. Whether through social media or email, mobility made it all about the timeliness of the information we create and exchange. Because of this, we have become accustomed to triaging our work on the go with devices that are very lightweight, have smaller screens, and, more often than not, have built-in connectivity. While we might not be creating a full presentation on the go or writing the next New York Times bestseller, we see what we accomplish in our day on the go as being productive. These devices have allowed what used to be downtime during travel or in between meetings to become an opportunity to keep up with what is going on at the office when we are not physically there. They have also created the opportunity to turn us all into control-freak workaholics, but that is a different story.

Work and Play is More Fluid

The other side of the coin of this always-on world is that work and play are more blended. Both with content and with tools, we cross boundaries all the time. Consumerization of IT, bring-your-own-device, bring-your-own-app, the cloud, and real-time collaboration are some of the results, or the sparks, of such blending. This means, when we look at our next PC/Mac to buy, we want to see familiar technologies we have come to love and depend on, like touch, voice, high-resolution screens, fast processors, and even pen support. Having all the apps we use every day on our PC/Mac would also be great but, given that our phones are never far from us, this is not necessarily a must.

Creativity Is All About Thinking Outside the Box

So, if productivity has more to do with our response time, making highly mobile devices more suited for it, what is creativity and what kind of devices does it require?

According to the dictionary, creativity is the mental characteristic that allows a person to think outside of the box, which results in innovative or different approaches to a particular task.

First, let me say I realize not all white-collar jobs are created equal or require the same skills and tools. I also realize there are many verticals, from health to education, that, depending on where you look, are either stuck in an analog world or fully into a digital one.

If I consider how my job has changed over time, I cannot help but see the impact of creativity in what I do. I see my job as delivering insights and advice to my clients. That has not changed since I started over 16 years ago. What has changed is what I deliver and how. I used to engage in three main ways: writing reports, delivering presentations, and taking calls. Today, while I continue to engage that way with clients, it is not the only way I deliver value to them. Social media, interactive webinars, podcasts, and blog posts have been added to my output list. By manipulating charts as I present using Pixxa, or drawing a mind map on my iPad Pro or Surface during a meeting, I am embracing new technologies and devices to make my workflow more effective. When I am not on the go, I appreciate a device that gives me a non-compromised experience: a device that allows me to be immersed in what I am doing, whether that is combing through thousands of data points, following a tweetstorm, watching a live stream of an event, recording my weekly podcast, or experiencing a mixed-reality environment.

This is good news for PC vendors because, if I am not alone, it could mean consumers shopping for PCs will be looking to invest more money for that non-compromised experience. It is also good news for platform owners, who will have another platform for consumers to engage with. This last point is, of course, particularly important for Microsoft, which needs to continue to build engagement with Windows 10.

In order for this to happen, however, we need to see more than just a beautiful design, which has been the focus for many vendors. Looking like a MacBook Air is not going to be enough for users who really want a rich experience. The focus should be on pushing the boundaries of how hardware, software, and apps all come together. While this gives the Microsoft Surface family, and Apple’s Macs, an advantage over other manufacturers that do not control their entire destiny, I strongly believe this will be a win for the entire industry and, most of all, for consumers.

The Indefatigable PC

By all rights, it should be dead by now. I mean, really. A market based on a tech product that first came to market over 35 years ago?

And yet, here we stand in the waning days of October 2016 and the biggest news expected to come out of the tech industry this week are PC announcements from two of the largest companies in the world: Apple and Microsoft. It’s like we’re in some kind of a weird time warp. (Of course, the Cubs are poised to win their first World Series in over 100 years, so who knows?)

The development must be particularly surprising to those who bought into the whole “PC is dead” school of thought. According to the proselytizers of this movement, tablets should have clearly taken over the world by now. But that sure didn’t happen. While PC shipments have certainly taken their lumps, tablets never reached anything close to PCs from a shipments perspective. In fact, tablet shipments have now been declining for over 3 years.

After tablets, smartwatches were supposed to be the next generation personal computing device. Recent shipment data from IDC, however, suggests that smartwatches are in for an even worse fate than tablets. A little more than a year-and-a-half after being widely introduced to the market, smartwatch shipments are tanking. Not exactly a good sign for what was supposed to be the “next big thing.”

Of course, PCs continue to face their challenges as well, particularly consumer PCs. After peaking in Q4 of 2011, worldwide PC shipments have been on a slow steady decline ever since. Interestingly, however, US PC shipments have actually turned around recently and are now on a modestly increasing growth curve.

The reason for this is that PCs have continued to prove their usefulness and value to a wide range of people, especially in business environments. PCs are certainly not the only computing device that people are using anymore, but for many, PCs remain the go-to productivity device and for others, they still play an important role.

To put it simply, there’s just something to be said for the large-screen computing experience that only PCs can truly provide. More importantly, it’s not clear to me that there’s anything poised to truly replace that experience in the near term.

Another big reason for the PC’s longevity is that it has been on a path of constant and relatively consistent evolution since its earliest days. Driven in part by the semiconductor manufacturing advances enabled by Moore’s Law, a great deal of credit also needs to be given to chip designers at Intel, AMD and Nvidia, among others, who have created incredibly powerful devices. Similarly, OS and application software advances by Apple, Microsoft and many others have created environments that over a billion people are able to use to work, play and communicate on a daily basis.[pullquote]PCs have actually never been stronger or more attractive tech devices—it’s more like a personal computer renaissance than a personal computer extinction.[/pullquote]

There have also been impressive improvements in the physical designs of PCs. After a few false starts at delivering thin-and-light notebooks, for example, the super-slim ultrabook offerings from the likes of Dell (XPS13), HP (Spectre X360) and Lenovo (ThinkPad X1) have caught up to and arguably even surpassed Apple’s still-impressive MacBook Air. At the same time, to the surprise of many, Microsoft’s Surface has successfully spawned a whole new array of 2-in-1 and convertible PC designs that have brought new life to the PC market as well. It’s easy to take for granted now, but you can finally get the combination of performance, weight, size and battery life that many have always wanted in a PC.

Frankly, PCs have actually never been stronger or more attractive tech devices—it’s more like a personal computer renaissance than a personal computer extinction. The fact that we’ll likely be talking about the latest additions to this market later this week says a great deal about the role that PCs still have to play.

Securing our Internet of Things

If we learned one thing from the DDoS attack that took down many websites on Friday, it is that we still have a long way to go when it comes to securing all the connected things in our lives. This particular attack used insecure devices, such as IP-connected cameras with weak security, as tools to perform the attack. This type of focused attack is one way to interrupt internet service and could be used to take down not just websites but our payments grid or any number of other things, which could wreak significant havoc on our society. Let’s hope this attack serves as a wake-up call for the industry.

This whole ordeal caused me to think about the range of connected devices I have in my house and to wonder about their security. I have personally secured most of my devices, and I don’t have my DVR (one of the types of IoT devices used in this attack) connected to the Internet. In most cases, I know the security I have in place for my IoT devices, but one in particular I had to look into more deeply: my solar panels. Our solar array is connected to our network so I can monitor how it is performing. We secured the log-in with a strong password, but the panels can also be remotely accessed in case we need support from the company we purchased the system from. It was the security measures around this remote access that I was not aware of.

In many cases, I was fairly aware of the security measures. I’m guessing most consumers are not. The challenge the industry has is to bear the burden of taking the necessary steps to provide increased security and encryption of these devices because the reality is many consumers will not know to take additional measures themselves.

Apple outlines the security measures in place for HomeKit devices and this is a solid initiative to provide a framework for security. However, many of the companies selling connected refrigerators, thermostats, IP cameras, coffee pots, etc., are likely to use not just HomeKit but other emerging standards as well. The burden of responsibility is on the companies providing these consumer products to enforce stronger passwords, two-factor authentication, or both, in order to make sure consumers are taking the necessary steps to secure their IoT devices so they can’t be used for malicious cyber attacks.

Interestingly, in this case, it wasn’t necessarily the fault of the brands selling the IoT products but of the component company behind them. Hangzhou Xiongmai Technology admits its products were used in the attack, as a malicious worm exposed the weakness in the default security of many of the products its components are found in. The company has said it has since patched this vulnerability and that consumers should update their firmware if they haven’t already.

My concern with the state of the market right now is that the companies rushing to capture a part of the growing connected and smart home market are not fully thinking through the implications of dozens of connected devices in consumers’ homes that may not be secured correctly. Consumers, although they will say they want and understand the value of security, rarely take the steps to ensure their own security and privacy. This is why it is so important for companies to bear this burden for consumers where they can, or to make sure they help consumers step up the level of security around their connected products.

Podcast: Dell-EMC World, Le Eco Launch, Apple Car

In this week’s Tech.pinions podcast Jan Dawson, Carolina Milanesi and Bob O’Donnell discuss the recent Dell/EMC World event, debate the potential impact of new Chinese content/device maker LeEco, and analyze recent news around Apple’s rumored car efforts.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Digital Assistants: Me Smart or World Smart?

Digital assistants are hot, that much is clear. What is less clear is what role they will play in our life. More importantly, how much we will let them be part of our life. I personally believe the extent to which we will feel comfortable including them in our lives will depend on how much we can trust they know us and how human-like our interaction with them can be.

I have spoken in the past about Jarvis and Mary Poppins as the two types of assistants I see vendors currently focusing on. One being very personal and one being more shared across family members. Both these roles imply quite a deep knowledge and understanding of what I as an individual and we as a family unit do, how we function, what we like and dislike. While asking our assistant for weather updates, fun facts of the day, alarm settings and correct spelling is quite fun, the novelty quickly wears off and the perceived return on investment is not actually life changing. Context and personal knowledge is what will deepen our relationship: warning you it will rain on Sunday when your assistant knows you are going to a BBQ, setting the appropriate timer for the cupcakes you just put in the oven, reminding you of what happened to you personally on a day 4 years ago — this is the kind of intelligence that will leave users wanting more and thinking about the assistant like a true genie in a bottle. Think about it this way: we all turn to Google to ask questions, so often that some even wonder if we might no longer be trying to remember because we know we can Google everything any time. Yet, I would argue, Google search does not evoke any emotional connection. Facebook memories, however, by serving you posts from your past that happened on a specific day, makes you relieve that experience – although more intelligence could be applied here when it comes to sad events in someone’s life – and really playing on people’s emotions while subconsciously making you appreciate Facebook and making you want to invest more time posting.

It’s all about me
One of the most annoying things for me when I start using a new assistant is to have to train it to say my name correctly — “Caroleena” not “Carolina”, like the US states. If you know me personally, you might have heard me say that, if I do not correct the way you say my name, it is because I do not expect to see you again. However, if we work together or I see you socially on a regular basis, I will correct your pronunciation. Why? Because if you keep on calling me Carolina I feel you do not really know me and, more importantly, you are not actually interested in knowing me. Right now, there is not much our assistants know about us that they are actively using to serve us. They might know where we live, where we work, might recognize my husband and daughter but there is little to no pro-activity in using that information in the exchanges we might have. Some have security concerns about just how much information they share with their assistant but the reality is you are not likely to share more than what you are already doing in various social media posts, online calendars and emails. The key difference, however, is what you share with your assistant will make a difference to you: reminders, alerts, suggestions, recommendations. Not everything can be learned automatically, though. So, at least initially, you will have to enter information, which is not very different from what most of us do today with our calendars, either digital or the old fashion one on the kitchen wall.

Better than us

Aside from being about me, I want my assistant to interact with me in a natural way. Last Friday, I had the pleasure of being a guest on Science Friday to discuss Digital Assistants together with Justine Cassell, one of the researchers at Carnegie Mellon behind SARA, the Socially Aware Robot Assistant. It was fascinating to hear that SARA spent 60 hours watching human interactions in a team environment and how those interactions changed over time — not necessarily for the better. One observation was as the humans became more familiar with each other over time, praise went down.

We have seen Microsoft’s bot Tay fail miserably because it became too human. Tay was modeled on a teenage girl, used millennial slang, knew about pop stars and TV shows and was quite self-aware, asking if she was ‘creepy’ or ‘super weird’. Sadly, she quickly started to be inappropriate, possibly succeeding in being a teenager but failing to be the marketing tool Microsoft wanted her to be. That was an extreme case, but it really showed the dangers of having bots and assistants learn from human interactions. At the time, Ina Fried wrote an essay I thought was exactly on point in how bots need to be better than humans, not equal to them.

The “I do not talk to technology” hurdle

Most consumers are not comfortable talking to technology, especially in public. This feeling is not just driven by talking into a phone or a PC – headphones would easily solve that problem – but it is more about having to learn to speak in a certain way in order to get a response. As humans, we do not talk to each person we encounter in a radically different way. We might be more or less polite or more or less formal, depending on the relationship we have, but the core of our question stays the same. We do not start each question by reengaging the interlocutor by saying their name like we have to with the assistants most of the time. We also do not always say everything we should or mean what we say. With current assistants, there is no real margin of error on the human side. We must be precise and offer all the information needed in order for the assistant to serve us. This is just too much work to put in especially as most people see assistants as nothing more than a voice search.

The combination of knowing us personally and letting us speak naturally will be key in growing our interactions and, ultimately, our dependence on our assistant. We might use different generalists but we will likely want one optimized assistant. It is interesting that this week, Russ Salakhutdinov, a computer science professor at Carnegie Mellon University, announced on Twitter he will be joining Apple as their director of AI research. I trust Apple will know its users the most because of the trust they have in the brand and the level of engagement they have with the products and the ecosystem. What Apple needs to focus on now is making Siri more conversational and proactive. This, of course, will take time. While we wait, we should be able to see continued improvements in the smartness of the device and apps we are using every day — from the camera, to Photos, to our calendar. Let’s appreciate the brains more as we wait for the pretty voice to become more and more part of our life.

LeEco’s Big Vision for the US Needs Fleshing Out

Chinese consumer technology and content company LeEco held its US coming-out party on Wednesday, introducing several new products to the US market for the first time and articulating its strategy for this most challenging of markets. It’s certainly not the first Chinese company to attempt to break into the US but, for most of the others, it’s either been a dismal failure or a long, hard slog with only modest success to show for it. What we got on Wednesday from LeEco was long on vision and buzzwords but with relatively little meat on the bones from the perspective of execution.

From LeTV to LeEco

LeEco has a complex company structure and an interesting history. It was originally LeTV, an online streaming service in China but changed its name to LeEco last year as it began to build what it describes as an ecosystem bringing together hardware, software, and services. However, where others pushing a similar vision tend to put hardware at the center of the vision, LeEco puts content first. That’s been the core of its strategy in China, building loyalty to its content services and then parlaying that into a presence in devices. It looks likely to be the thing that will set it apart the most in the US, too. No Chinese company coming to the US has yet tried a content-first strategy and the reasons are obvious: the content that works in China is very different from what will attract consumers here in the US.

To that end, LeEco has apparently signed lots of partner agreements with major Western content providers – a slide flashed up briefly at the event had names like Magnolia Pictures, MGM Studios, Showtime and others on it, while Lionsgate and Vice Media executives briefly spoke on stage. But it’s far from clear yet how this content will actually show up on or add value to LeEco’s devices. There will be an EcoPass subscription at some point but neither pricing nor all the content included have been announced yet (there will be a follow-up event on November 2nd to announce an additional content partner beyond the Fandor movie streaming service already announced). Two other video apps – one referred to inconsistently as either just “Le” or LeApp and the other as either Live or LeLive – were also discussed and it appears these are bundled into the devices and will offer additional content, though it’s not clear yet exactly what that content will be.

Given how critical content is to the LeEco value proposition, this was an odd omission. It suggests that, even though LeEco’s vision may be grandiose, its current ability to execute on that vision is a little pedestrian in comparison. That’s a shame because, although the value proposition is a challenging one, it is at least unique and could be effective if delivered in the right way.

Branding and marketing

Aside from the unfinished content story, LeEco’s other big challenge will be branding and marketing. Today, the brand is entirely unfamiliar to US customers, although the company did acquire TV maker Vizio recently. Unless that changes, none of the ecosystem or other benefits the company talked up on Wednesday will make any difference, because no one will ever know about them. LeEco talked up the economic benefits of its direct sales model (its LeMall website is a sort of single-brand Amazon) in terms of cutting out middlemen and reducing marketing and branding spend but the big disadvantage of going online-only is customers won’t encounter the LeEco brand in familiar stores. Vizio TVs will presumably continue to be offered through third-party distribution but it’s less clear that LeEco-branded devices will be.

The other major Chinese consumer tech companies have used both wireless carrier relationships and sponsorships of sports teams and events to gradually familiarize US consumers with their brands but it’s unclear whether LeEco has any similar plans. Starting from the ground up without either third party distribution or a massive brand awareness campaign seems like a recipe for failure. It doesn’t help that, as executives joked on stage, the “Le-” prefix conjures up misleading French associations, as well as being plain awkward when it’s so widely and inconsistently used (some sub-brands use the Le prefix joined to a word, like LeEco, while others use it as a separate prefix, as in the Le Pro3 phone, while the Eco element is used separately in lower case names for product lines like “ecophones” and “ecotvs”).

Though the ecosystem itself is more fully fleshed out than is typical from companies just entering the market, it’s still not clear what the on-ramp for consumers will be. It appears the LeMall e-commerce site is one such entry point but when US consumers have never used any LeEco products and that’s all the site appears to sell today, it’s not obvious why people would go there in the first place rather than to a more familiar site like Amazon (or even BestBuy.com or Walmart.com). When it comes to content, it appears the various offerings are all tied to device purchases, which makes that a tough entry point as well. It might be a lead generator were it available as a standalone app or a service users could try on their existing devices.

A fascinating new player

As you can tell from everything above, I’m rather skeptical LeEco can make a big dent in the US with what we saw outlined on Wednesday. However, what’s clear is the company is committed to the US in a big way – it’s already hired hundreds of employees and apparently intends to hire thousands more and the big flashy launch event was another sign it’s serious about the US market. The content partnerships are also impressive for a company that’s never done business in the US in a meaningful way. The focus on creating an ecosystem rather than a pure-play low-cost consumer electronics play is perhaps the most unique aspect of the LeEco proposition. In short, there’s lots here that makes LeEco a very interesting player to watch – certainly the most interesting to enter the US market for some time. While I’m bearish for now, much of what I’ve written above will be subject to revision as LeEco fleshes out its strategy, perhaps addressing some of my concerns as it goes. For now though, it’s certainly worth watching as the company begins to execute on its vision for success in the US.

Samsung’s Challenges Reveal Need for Greater Vendor Diversity

Much has been written about how a debacle like the Note7 happened and what Samsung might do to repair its reputation. Samsung’s challenges also shed light on a unique fact about the U.S. smartphone market: the high degree of vendor concentration. Industry research houses show Apple and Samsung represent nearly 75% of smartphone share in the US (Apple about 45%, Samsung 29%) and about 90% of the profits. The remaining share is spread among several vendors, with LG at nearly 10% and Motorola at about half that. Concentration has been creeping up in recent years. Four years ago, the combined Apple/Samsung share was just under 60%.

The picture is quite different in other parts of the world. Globally, as of Q2 2016, Samsung was the leading vendor at 23%, Apple had a 12% share, Huawei had 10%, and all others had 55% combined, according to IDC. The vendor share picture varies quite dramatically by region and even by country. Chinese vendors Huawei and ZTE have made significant progress in Europe, having captured some 25%+ share between them in certain countries. China, the world’s largest handset market, is a veritable vendor free-for-all. Nowhere is the market as concentrated as in the US.

Why is this the case? Well, one factor is our iOS-centricity: Apple’s share in North America is some 2x that of just about anywhere else. We also like our flagship devices. Average selling prices here are among the highest in the world, and US consumers show a particular penchant for the ‘latest, greatest’ iPhone or Galaxy. Another factor is the strong role the carriers play in the US market. Many of their pricing promotions and ad campaigns are centered around iconic iPhone and Galaxy launches.

This fall’s handset developments have revealed why this might be a problem. First, Apple introduced the iPhone 7 which, although a terrific device, received mixed reviews. I am sensing a bit of Apple ennui, particularly among younger users. Sales have been pretty good but replacement cycles are creeping up. Then there’s the Note7 debacle. It is hard at this point to gauge the long-term damage to Samsung’s reputation and whether it will significantly affect the company’s market share. But this has been both difficult and costly for Samsung’s major customers in the US, the wireless carriers, who play an outsized role in distribution compared to operators in other countries. We learned last week that Samsung is offering $100 vouchers to consumers, among other measures, but little has been said about what Samsung is doing to repair its reputation with the operators.

I believe US operators and consumers are left vulnerable by this concentrated market. First, Apple has never been all that operator-friendly — in fact, it continues to do things in the crosshairs of the operators, such as last year’s move into equipment financing. Second, what if the next iPhone is a dud or gets delayed by a few months? Operators have become pretty addicted to that clockwork September iPhone launch to meet their 4Q numbers. What if Samsung’s supply chain and QA problems are more far-reaching? What are the fallback options?

The challenge other handset OEMs have had in capturing share in this market over the past several years is a bit baffling. LG, Motorola, and HTC, among others, make excellent devices. In segments of the market where price is more of a factor, such as prepaid and MVNOs, their combined share is proportionately higher. But they can’t compete with the gargantuan ad budgets of Apple and Samsung and the carriers just haven’t given them all that much love.

I’m surprised the operators haven’t pushed harder. Why are they leaving themselves so vulnerable to an Apple or Samsung hiccup? Why haven’t they put more pressure on pricing? Why aren’t they exerting more influence on the phone’s user experience? With a leveling off of smartphone growth and longer replacement cycles, I am a bit surprised at operators’ order-takery mentality on devices.

Perhaps this is why Google is taking a renewed and more vigorous crack at the handset business with the recently announced Pixel phones. It realizes the market is increasingly tilting toward ecosystems and software—areas where it can exert an influence as long as the hardware is good (which seems to be the ante needed to play in developed country markets). Consumers buy iPhones, in large part, because of the Apple ecosystem. There is no single torchbearer for the Android ecosystem or even a device that maximizes Android’s potential to deliver a fantastic user experience. So, Google is thinking that embedding Google Assistant into the Pixel, plus the ability for Pixel to integrate with other announced Google hardware, such as Home, Hub and even the fledgling Project Fi service, could be part of a next-generation ecosystem play.

Why would vendor diversity be good? As protection in case of an Apple or Samsung hiccup (or worse); a hedge against said vendors’ occasional arrogance; a lever on inflated prices; and as a spur for innovation. This would also give the operators more skin in the game—a position they had not so many years ago.

Can IT Survive?

If you’ve ever worked at a business with at least 20 employees, you’ve undoubtedly run into “them”—the oft-dreaded, generally misunderstood, secretly sneered at (though sometimes revered) IT department. The goal of Information Technology (IT) professionals, of course, is to provide companies and their employees with the technical tools they need to not only get their jobs done, but to do so in an increasingly fast, flexible manner.

Frankly, it’s a tough, and often times thankless job. If your computer stops working, the network goes down, or some aspect of the company web site stops functioning, IT gets the brunt of the frustration that inevitably occurs. Beyond these day-to-day issues, however, IT is also tasked with driving changes to the infrastructure that underlie today’s modern businesses.

For that reason, IT has long been considered a strategic asset to most organizations. In fact, this central role has also turned the CIO—who typically runs IT—into a critical member of many business organizational structures.

But the situation for IT (and CIOs) appears to be changing—ironically because of some of the very same factors that led to its rise: most notably, the need for increased agility and flexibility.

The problem is, after several years (or more) of IT driven technological initiatives designed to improve reliability, increase efficiency, and reduce costs for key business processes, a large percentage of these companies have come to realize that the best solution is to have someone else outside the company take over. From more traditional business process outsourcing, through the evolution of nearly everything “as a service,” to the growth of public cloud computing resources, we’re witnessing the trickle of projects leaving the four walls of an organization grow into a fast-moving stream. As a result, IT departments are often doing less of the technical work and more of the management. In the process, though, they’re moving from a strategic asset to a growing cost center.

The implications of this change are profound, not only for IT departments, but to the entire industry of companies who’ve built up businesses designed to cater to IT. All of a sudden, equipment suppliers have to think about very different types of customers, and IT departments have to start thinking about very different types of partners. Arguably, it’s also driving the kinds of consolidations and new partnerships between these suppliers that seem to be on the rise.[pullquote]All of a sudden, equipment suppliers have to think about very different types of customers, and IT departments have to start thinking about very different types of partners.”[/pullquote]

The causes for these kinds of changes are many. Fundamentally, the revolution in the technology side of the business computing world has been even more extensive over the last few years than many first realized. To put it another way, though we’ve been hearing about the impact of the cloud seemingly forever, it’s only now that we’re really starting to feel it in the business computing world.

Another cause is an interesting bifurcation in the challenges and complexities of the products and services that have traditionally sat under the watchful eye of the IT department. On the one hand, many previously complex technologies and systems that required specialized IT expertise have become easy enough for non-IT line of business leaders to purchase and successfully deploy. Converged and hyperconverged appliances, for example, have brought datacenter-grade compute, networking and storage capabilities into a single box that even moderately technically people can easily manage through a simple interface.

In addition, managed service providers, hosted data exchanges, public cloud providers and a host of other companies that didn’t even exist just a few years back are offering utility-like computing services that, again, are offering increasingly easy solutions for business departments and other non-technical divisions of a company to quickly and economically put into production. More importantly, they’re doing it at a significantly faster pace than what many overburdened and highly process-driven IT organizations can possibly achieve.

Some IT professionals are dubious (and highly concerned) about these type of rogue shadow IT initiatives, but they don’t appear to be slowing down. In fact, in the case of a hot new area like Enterprise IoT, research has shown that it’s often a branch of a company’s Operations department (sometimes even called OT, or Operations Technology) that’s driving the deployment of devices like smart gateways and other critical new IoT technologies—not the IT department.

At the other technological extreme, many companies are also finding that making the move to more cost-effective and more agile cloud-based solutions is actually proving to be much more technically complex and challenging than first thought. In fact, there’s recently been talk of a slowdown within some companies’ efforts to move more of their compute, software and services offerings to the cloud because of the lack of internal skill sets within IT to handle these new kinds of tasks. In addition, much of the most advanced computing work, in areas such as machine learning, AI and related areas, often requires access to specialized hardware and software that many companies don’t currently have.[pullquote]Many IT departments are finding themselves in an awkward position in the middle where the now-easier tasks no longer require their help, and the tougher tasks take a larger supply of employees with skill sets or resources they don’t currently have.”[/pullquote]

The result is that many IT departments are finding themselves in an awkward position in the middle where the now-easier tasks no longer require their help, and the tougher tasks take a larger supply of employees with skill sets or resources they don’t currently have. Ironically, the very technology that started to create new opportunities for IT professionals (and which many feared would take away more traditional jobs) is poised to now start taking back jobs from IT as well. Needless to say, it’s a tough spot to be in.

Despite these concerns, however, there is still clearly an important role for IT in businesses today—it’s just becoming much different than what it used to be. For CIOs and IT to succeed, it’s going to take a different way of thinking. For example, instead of evaluating products, it’s increasingly going to require evaluating and managing partners and services. Instead of sticking with slow, burdensome, “we’ll build it here” types of internal processes, it’s going to require a willingness to explore more external options.

The importance of technology in business will only continue to increase over time. As technological solutions become more ubiquitous, however, the concept of distributed responsibility for these solutions will likely become the new reality.

How does a Problem Like the Note 7 Happen?

Those involved in the design and manufacturing of hardware products understand that one of the most important phases of the process is testing. That’s the point when all of the assumptions that have been made need to be validated. The only way to do that is to build hundreds or thousands of units and subject them to a battery of tests. Even then, you might still find problems not anticipated once devices get into the hands of thousands of customers but the goal is to be sure they are relatively minor.

The basic tests include subjecting the products to a wide range of temperatures, humidity, and physical abuse, including shock and vibration. The goal is to ensure the product performs the same before and after, and that it remains intact and safe. Other tests include real-life user testing and measurements to ensure the product complies with regulatory requirements.

Performed properly, the testing typically takes a large group of quality and manufacturing engineers several months. Companies have rooms full of test equipment, including large ovens, shake tables, and fixtures that exercise buttons and switches millions of times to simulate actual use.

Yet, in the case of the Samsung Note 7, it is puzzling that the company claimed it was able to identify the problem with the initial shipment, fix it, test it, and ship a half-million replacement units in just two weeks. That just doesn’t compute and, apparently, that suspicion was verified by the failures of the second batch of units.

So now, it’s quite possible the problem might have been caused by another component that interacts with the battery, rather than the battery itself.

Testing of smartphones is particularly important because batteries pack a huge amount of energy into a small volume. They contain circuitry to prevent a runaway condition should the battery or charging circuitry fail or go out of spec. The batteries are custom made to fit into the allotted space. Often, several companies or divisions are involved: the company building the battery cells, the company packaging the battery and adding the circuitry and connector, and the company putting the battery into the phone. That creates another opportunity for error: the company doing the assembly may have assumed the battery integrator performed sufficient testing. I’ve often found that communication and a clear division of responsibility among companies are weak points.
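The protection circuitry described above amounts to threshold logic: monitor the cell and cut off charge or discharge before anything goes out of spec. Here is a minimal sketch of that idea in Python. The thresholds are illustrative assumptions for a generic lithium-ion cell, not the actual limits or behavior of any phone’s protection circuit:

```python
# Hypothetical sketch of battery protection threshold logic.
# Limits below are illustrative, not any vendor's actual values.

MAX_CELL_VOLTAGE = 4.35   # volts; above this, charging must stop
MIN_CELL_VOLTAGE = 2.50   # volts; below this, discharge must stop
MAX_TEMPERATURE = 60.0    # degrees Celsius; above this, disconnect the pack
MAX_CHARGE_CURRENT = 3.0  # amps; above this, charging must stop

def protection_action(voltage, temperature, current):
    """Return the action a protection circuit would take for one reading."""
    if temperature > MAX_TEMPERATURE:
        return "disconnect"        # thermal runaway risk: cut the pack off
    if voltage > MAX_CELL_VOLTAGE or current > MAX_CHARGE_CURRENT:
        return "stop_charging"     # overvoltage or overcurrent while charging
    if voltage < MIN_CELL_VOLTAGE:
        return "stop_discharging"  # deep discharge damages the cell
    return "ok"
```

The point of the sketch is how simple the logic is; the failures come when the sensing hardware, the spec limits, or the handoff between the cell maker and the pack integrator is wrong, not the logic itself.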

Yet, even when a product passes all of this testing and has a sound design, there’s another thing to worry about: how well the product is manufactured on the assembly line. Most lines rely on many workers who perform the assembly operations, rather than on automated assembly using robotic equipment. Each operator has instructions and tools for a job that varies from attaching a circuit board assembly to the chassis, to positioning and screwing in the display, to soldering a large component in place.

But it’s not uncommon for an operator to make a mistake: not tightening a screw sufficiently or shorting out a circuit. To minimize this, other operators are interspersed along the assembly line to test the partial assemblies, and the completed product then goes through functional tests to ensure it’s working.

But mistakes do happen. One electronic product I was involved in had a screw that was not tightened sufficiently. With little effort, it came loose and rattled around inside the product. That could be catastrophic because the metal screw could short out a battery or blow a circuit. In this instance, the line was building two thousand units a day on two 8-hour shifts and, by the time the problem was discovered, 8,000 units were affected. It was traced to one operator on one shift who failed to tighten the screw, even though she had a calibrated screwdriver that should have prevented this. So, one individual, who might have been distracted or wasn’t sufficiently trained, caused a massive problem that required thousands of units to be opened, fixed, and reassembled.

Imagine a factory building 100,000 units a day and you can see how a small error can have huge consequences, much like the analogy of a butterfly flapping its wings and causing a hurricane halfway across the globe.
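The scale of the exposure is simple arithmetic: units built per day times days until the defect is detected. A back-of-the-envelope sketch, where the four-day detection lag is an assumed figure for illustration:

```python
# Back-of-the-envelope defect exposure: how many units are built before
# a single-operator error on the line is caught. Figures are illustrative.

def exposed_units(units_per_day, days_until_detected):
    """Units built (and potentially needing rework) before detection."""
    return units_per_day * days_until_detected

# A 2,000-unit/day line with an assumed four-day detection lag
# exposes 8,000 units.
print(exposed_units(2_000, 4))     # 8000

# The same lag at a 100,000-unit/day factory exposes 400,000 units,
# each of which may need to be opened, fixed, and reassembled.
print(exposed_units(100_000, 4))   # 400000
```

This is why in-line testing stations matter so much: every day shaved off the detection lag cuts the rework pile by a full day’s production.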

Podcast: PC Shipments, PlayStation VR

In this week’s Tech.pinions podcast Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the recent PC shipment and forecast numbers from IDC and Gartner, and analyze the impact of Sony’s PlayStation VR on the overall virtual reality device market.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Pixel and Surface: Comparing Google and Microsoft’s Hardware Game Plans

Google’s recent launch of the high-end Pixel and Pixel XL smartphones marks the company’s first self-branded entrance into the market, after dabbling via partner-branded Nexus products and a short run as the corporate owner of Motorola. Like Microsoft in 2012 and Apple decades before that, the company clearly understands that, to drive the best possible premium customer experience, it must own not just the software and services on the device but the device hardware itself. It’s instructive to compare Google’s plans for Pixel with what Microsoft has done with Surface and how both have modeled elements of their strategy on Apple.

Let’s start with a short history of Microsoft’s Surface. Microsoft launched the first two versions of the Surface, the Surface with Windows RT and the Surface Pro, in 2012, at the same time as it rolled out the ill-fated Windows 8 operating system. The product line suffered a rocky start as the company struggled to define its two different products, which resulted in a $990M loss. Undaunted, in late 2013, Microsoft launched the Surface 2 (the last product based on Windows RT) and the Surface Pro 2 (Windows 8) and started to sharpen the products’ focus.

The Surface Pro 3 launched in 2014 to better reviews and improving sales. In 2015, Microsoft launched Windows 10, a demonstrably better operating system, which also helped Surface. Later that year, the company launched Surface 3, a lower-priced Windows 10 product, followed by Surface Pro 4 and the new, even pricier Surface Book. Microsoft will hold an event in late October where the company is expected to launch additional Surface-branded products.

Market Maturity
A fundamental difference between Pixel and Surface has to do with the maturity level of the market each device is entering. While Microsoft certainly didn’t invent what we now call the detachable market, it did put it on the map. Frustrated with its partners for not moving faster to embrace the form factor, Microsoft launched the first Surface into a market with total 2012 shipments of 4.6M units. After a slow start, Microsoft moved into the number one spot, largely maintaining that position until Apple arrived with the iPad Pro. In 2015, the detachable market reached 16.6M units. It will double in size in 2016 and we’re forecasting strong growth for the next few years. Detachables are a high-growth category but, from a volume perspective, quite small versus relevant adjacent categories.

Contrast this with the smartphone market, with massive shipment volumes but slowing growth. In 2014, the worldwide market grew 10.4% year over year, with shipments of 1.4B units. In 2016, growth will slow to 1.6%. Moreover, much of the market’s growth is happening at the low end, primarily in emerging markets. There is still clearly a market for premium products, including high-end Android but, as a percentage of the market, that high-end space is shrinking and Apple and Samsung have a strong grip on many of these customers.

Channels
Obviously, Samsung’s current Note 7 recall woes present a golden opportunity for the Pixel in the premium space. However, it is unlikely Google is prepared to take advantage of that opportunity because its phone simply isn’t going to be available in all the channels where people traditionally obtain their smartphones. For example, in the United States, the only telco to offer the product is Verizon (although the Pixel will work on other networks). Best Buy will also carry the phone and, of course, you can buy it from Google’s online store. Google must be willing to expand the Pixel’s availability if it hopes to move the needle with this product. The company’s long-standing reluctance to embrace a wider channel strategy must evolve. This was one of the key elements of the Surface’s eventual success. In fact, in 2015 Microsoft went so far as to embrace Dell as a reseller of Surface hardware, bundled in a Dell-owned service contract. Apple has similarly embraced an ever-widening channel approach, especially when it comes to commercial buyers with high-profile deals with IBM, Cisco, and Deloitte.

The Partner Tightrope
One of the reasons Google is likely limiting the initial channels for Pixel is that, like Microsoft, it is walking a tightrope with a long list of hardware partners who use its operating system on their products. By targeting the high end, both companies argue they effectively limited their total available market, which gives them each cover with their partners. And it’s worth noting that, even when Microsoft has offered slightly lower priced (but still high end) Surface products, they haven’t done particularly well. While Microsoft isn’t rushing to put out less expensive products, it has certainly expanded its high-end product line with the Surface Book and I expect it to further expand later this month. I would expect Google to do the same over time. So, while both companies try to placate their partners, those partners should be wary just the same.

Customer Service
One of the key elements Microsoft copied from Apple, and one that differs for Google, is the vendor store. Apple Stores are not only a place to buy Apple hardware but a place to go if a customer needs some face-to-face help with a device. Microsoft’s rapid expansion of its stores allowed it to offer a similar level of hands-on customer service for Surface. Google doesn’t have physical stores of its own, so it is taking a different approach by offering 24/7 customer service through the phone. In either call or chat form, a human can help with issues and you can even share your screen. It’s worth noting Google isn’t the first to do this, as Amazon has offered this service on its tablets in the past. This is clearly not the same as being able to talk to someone in person but, for many people, it may prove to be the preferred method.

The Long Game
The debate I’m having with my industry colleagues is about Google’s long game in hardware. Is the company simply trying to urge its partners to bring better devices to market or does it plan to own a chunk of the market? Microsoft argued it was doing the former with Surface but certainly didn’t bow out when partners such as HP, Lenovo, and Dell brought higher quality detachables to market. Whatever Google has planned, there’s no doubt the Android market is better with it in the mix. It will be interesting to see how the company’s products fare and how its partners respond in the coming months.

The De-Democratization of Online Publishing

One of the wonderful things about the rise of the web, twenty-something years ago, was the way in which it democratized publishing – suddenly, anyone with an idea could set up a website and make it available to everyone. Early on, publishing online required at least a rudimentary understanding of code; to be an online writer meant you also had to be a coder. But services quickly emerged that offered WYSIWYG editors for online publications, so literally anyone who had used a word processor could create online content.

Recently, however, we’ve seen the rise of proprietary formats like Google’s AMP, Facebook’s Instant Articles, and the Apple News Format, which threaten to de-democratize publishing on the web. To be clear, I’m not making a philosophical argument about the closed nature of these platforms but something much more practical: creating content for these formats reintroduces a coding requirement and online code is vastly more complicated today than it was in the mid-1990s.

A personal history

I first encountered the web when I entered university in 1994. It was a pretty primitive thing back then, with very limited ways to access it, and it was almost entirely text-based. But over the next four years, things moved forward rapidly, with additional web browsers improving the process of browsing the web and hosting and other online services making it easier for ordinary people like me to set up an online presence. By the time I graduated in 1997, not only was browsing the web a big part of my life but I had a website of my own. In order to build that website, I had to learn HTML which, at the time, was a very simple thing to grasp, at least at a basic level. But that coding requirement still prevented many people from creating an online presence.

Interestingly, I basically took a two-year break from the web between early 1998 and early 2000 while I was serving as a missionary in Asia. When I returned, the web had again moved on significantly. Blogger had launched in 1999 and was one of the first sites that enabled people to create their own websites without knowing anything about coding, web hosting, or any of the other more technical aspects that had previously characterized online publishing. Almost all of my online publishing since has been based on various blogging platforms and, for the last ten years, almost exclusively on self-hosted WordPress sites. Along the way, because I’ve always had something of an interest in coding, I’ve beefed up my understanding of HTML, grappled with CSS style sheets, and even done some messing around with PHP. But I’m always enormously grateful I don’t have to try to build sites that would perform well from the ground up – I’ve long since given up on that idea.

Enter AMP, Instant Articles, and Apple News

So much for my personal history. Since last summer, we’ve seen what I’d argue is the latest phase in this online publishing evolution. It involves the creation of a variety of proprietary formats for online publishing. Google has been spearheading the Accelerated Mobile Pages project (AMP), which launched officially almost a year ago. Facebook introduced its Instant Articles format last summer, with a similar objective of accelerating the delivery of articles on mobile devices. And Apple introduced News as part of iOS 9, opening it up to publishers over the summer and to most users in the Fall, albeit with different intentions.

Here’s what these platforms have in common, however: each uses a proprietary format to deliver articles to readers. Technically, these formats use standards-based elements – for example, AMP is a combination of custom HTML, custom JavaScript, and caching. But the point here is the outputs from traditional online publishing platforms aren’t compatible with any of these three formats. And, in order to publish to these formats directly, you need to know a lot more code than I ever did back in the mid-1990s, before the first round of WYSIWYG tools for the web emerged.

As a solution, each of these platforms has provided tools intended to bridge the gap – all three, for example, have WordPress plugins to convert content to the appropriate formats. But a quick read of the reviews for the Facebook and AMP plugins tells you they don’t seem to be doing the job for many users. The Apple News plugin has a higher rating, but I know from my own experience that it’s problematic. Both Facebook and Apple also offer RSS tools to import existing content but there are limitations around both (Apple News doesn’t allow advertising in RSS-driven publications, while Facebook IA requires a custom RSS feed with IA-specific markup, which is again going to be beyond the ken of most non-coding publishers). Apple News offers a WYSIWYG tool, but it’s extremely basic (it doesn’t support embeds, block quotes, or even bullet points).

Why does all this matter? After all, no one is forcing anyone to use any of these formats – publishing to the open web is still possible. While that’s technically true, at least two of these formats – AMP and Instant Articles – are being favored by the two largest gatekeepers to online content: Google and Facebook. Google now favors AMP results in search, while Facebook does the same within its News Feed, though less explicitly (by favoring faster-loading pages, it gives IA content a leg up). Apple News is different – it’s a self-contained app and it’s basically irrelevant to you as a publisher unless your readers are using it. But if you do decide to use it, unless you publish in Apple News Format, you can’t monetize your content there and Apple is pushing the News app heavily to its users.

Turning back the clock

The upshot of all of this is, unless you’re comfortable with fairly advanced web coding, or can pay someone who is, your online publication is likely to become a second-class citizen on each of these new platforms, if it has a presence there at all. And, as these platforms – especially AMP and Instant Articles – suck up an ever greater proportion of online content, that’s going to leave smaller publishers out in the cold. That, in turn, means we’re effectively turning back the clock to a pre-web world in which the only publishers that mattered were large publishers and it was all but impossible to be read if you didn’t work for one of them. That seems like an enormous shame and, from a practical standpoint, it matters a lot more to me as an online writer than more philosophical debates about open vs. closed platforms.

CarPlay: The Best Incarnation of Apple’s Ecosystem

Apple is making a car. The code name is “Project Titan.” Apple brings back Bob Mansfield from retirement to lead the project. Apple lays off dozens of employees who were presumably working on the car project that was never confirmed. Apple might no longer be making a car. There! You are all caught up on the months of speculation around Apple and cars!

What I do know for sure is Apple is in my car today. A new car I have had now for about 10 days. A totally unnecessary purchase justified by the fact that my old car – a 2014 Suburban – was not technologically savvy enough. Now, I have the 2016 model and it does all sorts of things for me — warning me about lane departures, making my seat vibrate when a car or pedestrian is approaching me while reversing, and showing me the direction with a big red arrow on my screen. The most interesting part, however, is having CarPlay and Android Auto.

As I am currently using an iPhone 7 Plus, I tried out CarPlay and the results are quite interesting.

I have been using Google Maps pretty much since I got to the US four years ago. My old car had a navigation system but I hated it so I was using my phone with a Bluetooth connection. I had tried Apple Maps when it first came out but went back to Google and soon got used to certain features, such as the multi-lane turn as well as the exact timing of the command. I got comfortable with it and, aside from trying out HereWeGo and Waze, I have been pretty much happy with Google.

Having CarPlay made me rediscover Maps and features like where I parked my car, the suggested travel time to home, school, or the office, and suggestions based on routine or calendar information — all pleasant surprises that showed me what I had been missing out on. It also showed me how, by fully embracing the ecosystem, you receive greater benefits. Having directions clearly displayed on the large car screen was better and, while there is still a little bit of uneasiness about not using Google Maps, I have now switched over. Maps on Apple Watch completes the car experience, as the device gently taps you when you need to make the turn. It is probably the best example I have seen thus far of devices working together to deliver an enhanced experience vs. one device taking over the other.

Music has been in my car thanks to a subscription to Sirius XM but, at home, we also have an Apple Music subscription as well as Amazon Prime Music. With CarPlay, my music starts to play in the car as soon as the phone is connected and, despite my husband’s initial resistance, this past weekend, he was converted. He asked Siri to play Rancid and he was somewhat surprised when one of his favorite songs came on. My daughter is also happily making requests to Siri and everybody catching a ride is quite relieved not to be subjected to Kidz Bop Radio non-stop.

The best feature, however, is having Siri read and compose text messages for you. I know I can do that outside my car as well but I rarely do because, frankly, I don’t have to: typing serves me just fine. When I interact with Siri, the exchange feels very transactional: I ask a question, I get an answer, and that is it. The car is the perfect storm when it comes to getting you hooked on voice commands. You are not supposed to be texting and driving, the space is confined, and there is little background noise as the music is turned off when you speak (I have to admit, a switch to turn off the kids would be nice too). Siri gets commands and messages right 90% of the time, which gets me to use her more. Interestingly, it is also the time when I have a more natural, more conversational exchange with Siri:

Siri: There is a new message from XYZ would you like me to read it to you?
Me: Yes, please.
Siri: (reads message)
Siri: Would you like to respond?
Me: Yes
Siri: Go ahead
Me: Yada Yada Yada
Siri: You are replying Yada Yada Yada, ready to send?
Me: Yes

At the end, you have a pretty satisfied feeling of having achieved what you wanted without once taking your eyes off the road ahead.

Our voice assistant survey did show consumers prefer to use their voice assistant in the car. Fifty-one percent of the US consumers we interviewed said they do, so I am clearly not alone. I would argue that interacting through the car speakers vs. the phone – assuming you are not holding the phone to your mouth, which would not be hands-free – gives you higher fidelity and, therefore, a better, more engaging experience.

While we wait for autonomous cars (maybe even one by Apple) to take over and leave us free to either work or play while we go from point A to point B, it is understandable that CarPlay stays limited to functions that complement your driving but do not interfere with your concentration. That said, I think there is a lot of room for Apple to deliver a smarter experience in the car if it accesses more information from the car and the user: suggesting a gas station when the gas indicator drops below a certain point, a place to park when we get to our destination, or a restaurant if we are driving somewhere we have not been before close to lunch time. The possibilities are many.

The problem with CarPlay is it relies on consumers upgrading their cars to one of the over 100 models available or installing a CarPlay kit — which can range from just under $200 to over $700, depending on brand and quality. This is a steep price to pay when you are not quite sure what the return on your investment will be. Apple needs to find a way to lower that barrier and speed up CarPlay adoption. The more users experience CarPlay, the easier it will be to get them to take the next step when it comes to cars, whether an Apple-branded car or a fuller Apple experience in the car.

Galaxy Note 7: The Death of a Smartphone

It’s hard to imagine a much worse scenario.

The world’s leading smartphone company debuts a new device that initially is touted as one of the best smartphones ever made. Glowing reviews quickly follow and the company’s prospects for a strong fall and holiday season, and the opportunity for regaining some lost market share, seem nearly assured.

But then a small number of the phones start to overheat and catch fire. The company tries to react quickly and decisively to the concern and issues a recall of several million already shipped devices. It’s a somewhat risky and certainly expensive move, but the company initially receives praise for trying to tackle a challenging problem in a positive way.

Customers are reassured that the problem seems to lie not in the phone itself, but in a battery provided by one of the company’s third-party battery suppliers (ironically, most believe the culprit to be Samsung SDI—a sister company of Samsung Electronics).

And then, the unthinkable. Replacement phones start to show the same problems and the company is forced to stop the production and sale of the device, encourage its telco and retail partners to stop selling it, and tell all its existing customers to stop using it. Just to add insult to injury, the US Consumer Products Safety Commission (CPSC) also sends out notes to consumers encouraging them to stop using the device, while the Federal Aviation Administration (FAA) and major airlines around the world reinforce the message they’ve been saying for the last several weeks on virtually every airplane flight in the world: don’t use, charge or even turn on your Samsung Galaxy Note 7.

It’s probably the most negative publicity a tech product has ever seen. The long-term impact on the Samsung brand is still to be determined, but anyone who’s looked at the situation at all knows it can’t be good. At this point, it appears that the Note 7 will likely end up being removed from the market, costing Samsung billions of dollars, and there’s even been some concern expressed about Samsung’s ability to save/sustain the Note sub-brand.

Part of the issue isn’t just the product itself—although that’s certainly bad enough—but the manner in which the company is now handling it. Reaction has quickly moved from praise for Samsung’s initial quick efforts to address the issue, to disbelief that they could let a second round of faulty products that are this dangerous get out the door.

On top of that, there are many unanswered questions that need to be addressed. From a practical perspective, what is the cause of the problems if it isn’t the battery cell (the charging circuits?) and what other phones might face the same dangerous issues? Why did Samsung rush out the replacement units without actually figuring out what the real cause was? What kind of testing did they do (or not) to be sure the replacements were safe?

Beyond these short-term issues, there are also likely to be some bigger questions that could have a longer-term impact on the tech market. First, what types of procedures are in place to prevent this? What governmental or industry associations, if any, can take responsibility for this (besides Samsung)? Will products need to go through longer/more thorough testing procedures before they’re allowed on the market? Will product reviewers need to start doing safety tests before they can really make pronouncements on the quality/value of a product? How can vendors and their suppliers work to avoid these issues and what mechanisms do they have in place should it happen again to another product?

Some might argue that these questions are an over-reaction to a single product fault from a single vendor. And, to be fair to Samsung, there have certainly been reported cases of other fire and safety-related issues with electronics products from other vendors, including Apple, over the last few years.[pullquote]Our collective dependence on battery-driven devices is only growing, so it may be time to take a harder, more detailed look at safety-related testing and requirements.[/pullquote]

But when people’s lives and health are at stake—as they clearly have been with some of the reported Galaxy Note 7-related problems—it’s not unreasonable to question whether existing policies and procedures are sufficient. Our collective dependence on battery-driven devices is only growing, so it may be time to take a harder, more detailed look at safety-related testing and requirements.

Given the breakneck pace and highly competitive environment for battery-powered devices, there will likely be industry pushback against prolonged or more expensive testing. As the Galaxy Note 7 situation clearly illustrates, however, speed doesn’t always work when it comes to safety.

Finally, the tech industry needs to take a serious look at these issues themselves, and figure out potential methods of self-policing. If they don’t, and we start hearing a lot more stories about other devices exploding, catching on fire or causing bodily harm, you can be assured that some politician or governmental agency will use the collective news to start imposing much more challenging requirements.

As the old saying goes, better safe than sorry.