The Solitary Inventor

The solitary inventor once typified innovation in America. Over successive generations we’ve read about individuals who labored on their own in a workshop or garage to create their inventions. One example was Bob Olodort, a friend and business associate who spent his working life developing a range of unique products, often facing skepticism and criticism along the way.

I met Bob in 1992 when I was working for Seiko Instruments, a Japanese company that developed small printers for commercial use, such as those in gas pumps and point-of-sale devices. Olodort had reached out to Seiko looking for a partner to create his invention: a small, single-purpose label printer that would print one label from a computer on demand. It eliminated the need to run a whole sheet of labels through a printer to produce just one, or the awkwardness of feeding an envelope through a conventional printer. His tiny printer also used software to automatically recognize an address on the computer screen and print the label with just a couple of keystrokes, an early example of using software to enhance hardware performance.

Olodort faced a lot of skepticism. Critics ridiculed the idea of using a thermal printer and of putting labels on formal correspondence, suggesting it was unprofessional and made letters look like junk mail, and insisted thermal technology was not suitable. Yet, like most inventors, Bob was undeterred and dismissed the criticism.

Eventually, Seiko licensed the product concept and created the Seiko Smart Label Printer. As a recent hire at Seiko, I was given the job of working with Bob to take the product from concept to manufacturing. It was a contentious project within a normally conservative Japanese-based organization, but it had the support of Seiko’s U.S. management, notably Hiroshi Fukino and John Rehfeld, who cleared the path for the product’s funding from Japan and its U.S. development oversight. It was Seiko’s first product created and manufactured by its U.S. division.

The product came to market and was well received. As with many products, customers found new uses for it, including labeling file folders, creating bar code labels, and organizing Rolodex files. It showed how one individual with an idea and perseverance created an industry that had not existed before.

Bob continued inventing. A few years later I got a call from him telling me he was working on a new idea: a full-size keyboard that could fold into a size so small it could fit into your pocket. It would be used with PDAs, an emerging product category at that time. His vision was to replicate the typing experience of a ThinkPad notebook, the standard of excellence for keyboards.

I was skeptical. Keyboards are made up of hundreds of tiny parts, including keys, actuating mechanisms, switches, and springs. I just couldn’t envision such a product. When he showed me his first prototype, it was even more complex than I had imagined: a series of key switches mounted on a structure that rotated each key in unison into a vertical orientation, collapsing the keyboard into a stack of keys. It looked like a manufacturing nightmare, but Bob was undeterred. Like most inventors, he let none of these objections get in the way of his vision. He focused on the value of the end product. Issues like manufacturability, cost, and complexity were not reasons to stop but reasons to continue. They were just more challenges to solve.

Bob and I eventually formed a company, Think Outside, that spent the next two years developing and building the Stowaway, the first truly pocketable full-size keyboard. It was sold under numerous brands, including Palm, Targus, Sony, and Nokia, and became the most successful accessory for Palm PDAs. All told, about 3 million units were sold, with third-year sales reaching $40 million. It was named product of the year in 2000 by PC Magazine and is included in the permanent design collection of the Museum of Modern Art.

Bob showed once again how pursuing a single idea with tenacity, patience, and optimism, while ignoring the skeptics, can create a new industry. It takes a unique individual, willing to work alone, focus on the end result, and plow through the day-to-day setbacks, to accomplish what he did. Few of us can do it, because we look for outside reinforcement and acceptance from our peers. Most of us don’t have the traits that Bob and other inventors possess that create truly breakthrough products. Many of us would have trouble working alone for months on end, as opposed to being part of large organizations that provide social support but also often discourage individuality and risk-taking.

Earlier this month Bob passed away after a long illness. But he will be remembered for bringing delight to the millions of users of the Smart Label Printer and the Stowaway Folding Keyboard.

A personal note from Tim Bajarin

I had the opportunity to work with Bob Olodort on the Stowaway Keyboard. It was a marvel of ingenuity and to this day is still the best-designed folding keyboard I have ever used. For Palm, it was a godsend. While most people did use the Palm stylus with Graffiti for input, the Stowaway keyboard made it an even more versatile productivity tool.

In the few years I worked with Bob, it became clear to me that he was the consummate inventor, one who loved working on new ideas and technologies and was passionate about the creative process. He represented the solitary inventor and symbolized the thousands of similar tinkerers and inventors around the world who have given us so many of the products we have used in the past and still use today. He will be missed by his family and friends and by those whose lives he touched in the world of technology.

Podcast: Microsoft and Netflix Earnings, Arm Flexible Licensing, FaceApp

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the quarterly results from Microsoft and Netflix, discussing Arm’s new Flexible Access IP licensing model, and debating the impact of FaceApp.

A Positive 2Q for PCs, Driven Partially by Tariff Fears

According to IDC’s preliminary results, the traditional PC market did better than expected during the second quarter of 2019. Shipments of notebooks, desktops, and workstations into the market grew about 4.7% year over year, to hit nearly 65 million units during the quarter. Typically, this would be good news for a market that has seen more than its fair share of struggles in recent years. Unfortunately, this unexpected growth wasn’t driven purely by market demand. Some of it was the result of vendors operating with a high level of fear, uncertainty, and doubt about the status of the U.S. trade war with China and the potential impact of an escalation that could lead to tariffs on finished PCs.

The Good News
The preliminary results showed growth across a wide swath of countries and regions. Of note: Canada continued its 12-quarter growth streak with a whopping 11% year-over-year gain, Japan continued to grow its commercial segment, and India grew thanks in large part to a huge education tender that drove more than 1 million units during the quarter. The U.S. also returned to growth after a slow start to the year.

There were three primary positive drivers during the quarter: shipments for the back-to-school season, an easing of Intel’s processor supply constraints, and the upcoming Windows 7 end of life. On January 14, 2020, Microsoft will no longer provide security updates or support for Windows 7 PCs. Unlike with the Windows XP EOL, which drove massive shipment gains back in 2014, companies are much further down the road in moving off the old operating system. So while this transition drove some volume in the second quarter and will positively impact the second half of the year, we don’t expect to see the huge increases we saw around XP.

The Bad News
Unfortunately, at least part of the shipment volume increases in the second quarter was the direct result of the ongoing trade tensions between the U.S. and China. While this primarily impacted those two countries, the issue permeated the entire industry. In the U.S., it appears some vendors shipped higher-than-needed volumes into the channel because they feared the U.S. would implement its List 4 tariffs, which would directly impact finished PCs.

At present, we don’t have a great sense of just how much oversupply is in the system, and it certainly varies by vendor. Of course, we now know the U.S. administration didn’t implement this escalation during the quarter. However, the threat remains, and this could lead vendors to continue to overstuff channels to beat the system in the second half of the year. So we could see more artificial growth in the months to come.

In China, the impact of the tariffs has caused the opposite result: Commercial organizations are holding off on needed PC purchases due to the resulting slowdown in the Chinese economy. A large number of companies there are likely to continue to hold off on purchases while they wait to see how the rest of the trade war plays out.

Looking Ahead
Tim Bajarin recently wrote about tech manufacturing moving out of China, and that’s an ongoing topic of discussion throughout the PC market. PC vendors and their ODM partners are looking for options to move some or all of their manufacturing out of China, and some have already begun the process. These companies are looking at both new countries as well as places that were previously major producers (such as Taiwan). While many companies had been looking at moving some manufacturing out of China due to rising costs there, the trade war has forced these companies to divert resources toward speeding up this process. If a trade war escalation causes these companies to make drastic moves, the results could be a disruption in supply.

The other underlying challenge for manufacturers looking to move out of China is the fact that there’s a chance that the U.S. administration could, in turn, levy tariffs on additional countries. For example, some vendors have looked at moving some manufacturing to Mexico, only to have the administration threaten (and then shelve) tariffs on that country.

Ultimately, all of this uncertainty results in companies having to divert resources, which results in a negative drag on a PC market that’s been trying to return to sustainable growth for years. While the threat of tariffs may drive short-term shipment growth as companies try to “beat the clock” in a given quarter, oversupplying channels isn’t a sustainable business model. With no clear end to tensions between the U.S. and China in sight, expect these issues to continue to impact the PC market for the foreseeable future.

Netflix’s Miss and Looming Competition

There were mixed reactions to Netflix’s earnings yesterday, largely on the news that it missed its own internal subscriber growth forecast and lost U.S. subscribers for the first time. For the bears, this news signaled the trend they have been predicting. The bulls buy Netflix’s story that the price hike and global climate were the reasons for the miss. Management is staying bullish that it will hit its Q3 global growth estimate of 7 million.

Services Potential for Seasonality
For all the pros of on-demand subscription services, a potential negative is seasonality: the concern that a service drives interest and loyalty with only a few shows, and after fans watch those shows, they cancel their subscription. This is the argument for HBO’s strategy of releasing shows weekly versus Netflix’s of releasing a show all at once. The all-at-once, binge-friendly release means a fan can subscribe, binge their show, then unsubscribe quite easily.

Looking at some metrics research, it appears HBO had just this seasonality dynamic hit around Game of Thrones. New subscribers to HBO Now were up 53% in April and then steadied out in the following months. This suggests consumers subscribed for GOT and then canceled when it was over. It was not a massive wave of people, but it was enough to show up in the data-tracking research.

Seasonality is something HBO Now, Netflix, or any other subscription service needs to worry about. It’s just as easy to cancel as it is to sign up, and if consumers are only interested in a few shows, seasonality could become an issue. This should also concern Netflix given it is losing Disney content, Friends, and The Office. As networks decide to create their own subscription services, they are likely not to renew their deals with Netflix for their most popular shows, in order to keep them for their own services.

Netflix understands this, which is why it is investing in an attempt to launch one or two new original productions each month. Netflix knows it always needs to have something new for its customers to watch if it wants to become the dominant streaming entertainment service.

Looming Competition
Netflix’s letter to investors did not seem too concerned about competition. I’d agree that competition was not a factor here, yet, and it is an open question how much competition will impact Netflix, if at all. Early research from UBS suggests interest in Apple TV+ and Disney+ is extremely high. Forecasts have Disney+ reaching 60 million subscribers by 2023. Netflix has 60 million U.S. subscribers right now, which equates to roughly half of U.S. households. Basically, Disney+ could have the same number of U.S. subscribers by 2023 that Netflix has today.

My conviction remains that consumers are not going to ditch Netflix to subscribe to things like Apple TV+ or Disney+. These will all be subscriptions in addition to Netflix, part of a broader set of services consumers subscribe to. Our first research study on consumer subscription services revealed high interest in switching away from cable or satellite TV bundles. We found that 48% of U.S. consumers are spending $80 a month or more on cable or satellite TV bundles. My conviction is that consumers will shift that $80-or-higher budget to other subscription services as they move away from cable bundles, rather than switching in order to lower their costs.

The benefit of this is that you get more value for the monthly money you spend, since you have handpicked the content that matters most to you. When we looked at consumers who had already canceled cable/satellite and moved to streaming services, they on average spent more and subscribed to more services than those who still had a cable or satellite bundle.

Given Netflix’s high household penetration in the U.S., growth is going to need to come from other markets. However, the budget opening and upside for other services like Apple TV+ and Disney+ seem clear. The main question is which, and how many, streaming services consumers will find value in, but I have little doubt Netflix will remain one of the primary ones.

Looking at Netflix data in Second Measure, which tracks a large portion of its user base via credit card transactions, overall customer retention is extremely high. Average transaction value is increasing, meaning that even as Netflix raises prices, customers stay loyal. But the data also shows that new customer growth in the U.S. has slowed dramatically over the last two years, which tells you Netflix has nearly saturated the U.S. market, and the only way to grow revenue in the region is to keep raising prices.

With the budget-shift dynamics I explained, I’m not sure that will become as big an issue as long as Netflix keeps its content frequency and quality high. Netflix needs to cement itself as the service where most TV and movie content is consumed; if it does that, there is more share of wallet it can acquire.

The Privacy Paradox

In the past few weeks, we were again reminded of the privacy paradox. Privacy, as a concept, sounds good, and people will always say it matters. Yet their behavior often contradicts their statements, because for many people privacy is not a core principle or decision-guiding conviction. There is a subset of people for whom privacy is a guiding conviction, but for the majority of consumers, it is not.

FaceApp and the Privacy Paradox
I wrote about FaceApp when it went viral a few years ago, putting it into the bucket of augmented reality. The app seems to have gone even more viral as of yesterday, with people all over social media showing pictures from the app’s feature that uses machine learning to estimate how you will look when you are old. I won’t go into why what it is doing is more like a parlor trick; perhaps that is for a different article.

By the end of the day yesterday, I had personally seen the vast majority of my friends on Instagram and Facebook post pictures of themselves as older people using this FaceApp feature. It happened fast and went viral, and a lot of people put their privacy at risk without even thinking about it. This is the privacy paradox.

The vast majority of the market will easily compromise their privacy for social show, meaning the chance to join in on the latest trend and show it off on social media. Yes, the feature was fun and got a few laughs, but I’m sure the question of privacy was largely never raised by many. I wonder how many people would have at least paused before using FaceApp if they had read the following excerpt from FaceApp’s privacy policy.

You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you. When you post or otherwise share User Content on or through our Services, you understand that your User Content and any associated information (such as your [username], location or profile photo) will be visible to the public.

You grant FaceApp consent to use the User Content, regardless of whether it includes an individual’s name, likeness, voice, or persona, sufficient to indicate the individual’s identity. By using the Services, you agree that the User Content may be used for commercial purposes. You further acknowledge that FaceApp’s use of the User Content for commercial purposes will not result in any injury to you or to any person you authorized to act on its behalf. You acknowledge that some of the Services are supported by advertising revenue and may display advertisements and promotions, and you hereby agree that FaceApp may place such advertising and promotions on the Services or on, about, or in conjunction with your User Content. The manner, mode, and extent of such advertising and promotions are subject to change without specific notice to you. You acknowledge that we may not always identify paid services, sponsored content, or commercial communications as such.

It’s actually not a bad privacy policy, because it is so clear about what you agree to. I’ve seen so many that are quite vague because they don’t really want you to know what they are up to. FaceApp’s is pretty clear: they are taking your data and your images and doing whatever they want with them.

As I said, I wonder how many of the people who got the app just to try the age filter, and then likely never used it again (without deleting it), would have still proceeded had they read this part of the privacy terms. I had the opportunity to ask a few friends and their families who were with me over the last few days, and after they looked at the privacy policy I got several responses. One well-intentioned friend read the policy and concluded that it was OK to use the app as long as he did not post the image to social media. Several others seemed to agree, while two of my other friends, both lawyers, said they would not use the app. Once I explained that the image they took of themselves was being sent to FaceApp’s servers to process the filter, and that the image falls under the “they do what they want with it” part of the privacy policy, they all agreed they would not use the app.

So, a small sample, but one that demonstrates how even a clearly written privacy policy can still be misunderstood and misinterpreted.

Platform Owners Can Do More
The question then turns to what software platform owners can do to continue to give consumers all the information they need to make a decision. I understand this is a fine balance: how much information is too much to handle, and do you run the risk that no one downloads apps anymore if it gets too complicated? However, Apple, in particular, has been architecting its platform with security in mind, and even notifies a consumer when an app wants to use their location, informing them of what that means so they can make an informed decision. I wonder if something like this also needs to apply to apps that use our photos, in particular when an app like FaceApp asks for access to my camera roll, a reasonable thing to ask in case I want to use a previously taken picture.

Matt Panzarino from TechCrunch brings this up in a recent post.

One thing that FaceApp does do, however, is it uploads your photo to the cloud for processing. It does not do on-device processing like Apple’s first-party app does, and like it enables for third parties through its ML libraries and routines. This is not made clear to the user.

I have asked FaceApp why they don’t alert the user that the photo is processed in the cloud. I’ve also asked them whether they retain the photos.

Given how many screenshots people take of sensitive information like banking and whatnot, photo access is a bigger security risk than ever these days. With a scraper and optical character recognition tech, you could automatically turn up a huge amount of info way beyond ‘photos of people.’

So, overall, I think it is important that we think carefully about the safeguards put in place to protect photo archives and the motives and methods of the apps we give access to.

I’m not sure if it’s possible within iOS for an app that has access to my camera roll to upload all of my photos to a server in the background, but Matt’s point about using ML to scrape for banking information, or anything else I may have taken a screenshot or picture of, is a great one.

Honestly, the camera roll should be as sacred as location and treated as such by the operating system. I agree 100% with Matt that more safeguards need to be in place around the camera roll, and I’ll look for Apple to lead here and hopefully start to address how to better inform its customers about the privacy risks to their photos when trying an app like FaceApp.

Towards a More Inclusive Work Environment Thanks to Tech

Last week Dell Technologies, in collaboration with the Institute for the Future (IFTF), published a report exploring how emerging technologies could impact the work environment over the next decade.

The report highlights four technological areas: Collaborative AI, Multimodal Interfaces, Secure Distributed Ledgers, and Extended Reality. There is no question that these areas will have a big impact on our future, from both a B2B and a B2C perspective. The extent of the effect on your overall business will depend on the vertical a company is in, but the impact some of these technologies will have on hiring, training, and collaboration will affect every business no matter what market you are in.

When I think about the workplace, there are two areas that, in my view, are not just ready for change but long overdue for it: talent discovery and retention, and collaboration.

Talent Discovery and Retention

Let’s start with how intelligence can fuel better talent discovery and skillset matching.

Today we still live in a world where white-collar positions are matched with candidates by a headhunter or a human resources manager. As much time as candidates spend writing about hobbies and activities in their cover letters, very little attention is paid to them, as the initial screening has for the longest time been based mostly on education and career. More recently, however, as our lives become more public thanks to social media, candidates have seen their digital lives brought to the interview table, mostly to be used as a measure of their character.

There is no question in my mind that the current system for finding and matching talent to the job to be done is long overdue for a change. Technology has the potential to help drive that change if companies are genuinely open to thinking differently. Take gender: is it indispensable to know what gender your candidate is? In most cases, I would argue it is not. What about ethnicity? Certainly not. But what about diversity and inclusion, I hear you say? This is precisely the point. If you designed an algorithm to look at the skills needed for a job rather than the qualifications, so that the first step in the process is a blind one, you would already start to widen the initial pool compared to what we have today. In the 2030 example given in the report, that process would be followed by an interview with the candidate in an extended-reality environment where the candidate chose to depict themselves in whichever way they saw fit, making gender and ethnicity much more fluid concepts.
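The blind first pass described above can be sketched in a few lines of code. This is a minimal illustration, not anything from the report: the candidate records, field names, and skill lists are all hypothetical, and a real screening system would be far more nuanced.

```python
# A minimal sketch of "blind" first-pass screening: protected attributes are
# stripped before any scoring, so the initial filter sees only skills.
# All field names and records here are hypothetical examples.

PROTECTED_FIELDS = {"name", "gender", "ethnicity", "age", "photo"}

def redact(candidate: dict) -> dict:
    """Drop protected attributes before any scoring happens."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

def skill_match_score(candidate: dict, required_skills: set) -> float:
    """Fraction of the required skills the candidate lists."""
    skills = set(candidate.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

def blind_screen(candidates, required_skills, threshold=0.5):
    """First-pass filter that never sees protected attributes."""
    pool = []
    for c in candidates:
        anon = redact(c)
        if skill_match_score(anon, required_skills) >= threshold:
            pool.append(anon)
    return pool

candidates = [
    {"name": "A", "gender": "F", "skills": ["python", "sql", "ml"]},
    {"name": "B", "gender": "M", "skills": ["excel"]},
]
pool = blind_screen(candidates, {"python", "sql"})
print(pool)  # only skills survive into the pool; identity fields never do
```

The design point is the ordering: redaction happens before scoring, so bias simply has nothing to attach to in the first step.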

The big concern with using AI is algorithmic bias. For instance, if you tried to create a model to identify the key requirements for being a doctor, and you trained it on photos depicting doctors over the years, it would probably conclude that wearing a white coat and being a white male are key required characteristics. Addressing bias in AI starts with addressing bias in ourselves, in particular in the people responsible for creating the ML models. The concern about bias in AI, however, should not distract us from the fact that the current process, led by humans, is often biased.
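The doctor example can be made concrete with a toy learner. This is a deliberately naive sketch of my own, not from the report: a "model" that simply memorizes the most common value of each attribute in its training data will faithfully reproduce whatever skew that data carries.

```python
# Toy illustration of how a skewed training set bakes bias into a model.
# The "learner" picks the most common value of each attribute, so an
# over-represented group becomes part of the learned "requirements".
from collections import Counter

def learn_profile(examples):
    """For each attribute, keep the most common value seen in training."""
    profile = {}
    for attr in examples[0]:
        counts = Counter(ex[attr] for ex in examples)
        profile[attr] = counts.most_common(1)[0][0]
    return profile

# Hypothetical historical data that over-represents white men in white coats.
training = [
    {"coat": "white", "gender": "male"},
    {"coat": "white", "gender": "male"},
    {"coat": "white", "gender": "male"},
    {"coat": "white", "gender": "female"},
]
profile = learn_profile(training)
print(profile)  # the incidental majority attributes are now "learned"
```

Real models are vastly more sophisticated, but the failure mode is the same: the bias comes in through the data, not the algorithm's intent.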

We have yet to fully understand how the data we all make publicly available can be used to create a more comprehensive portrait of potential employees. Data from our digital lives can play a role in highlighting not only the skills we have but also what drives us. I was looking at my own digital footprint through my social media presence, and there is a lot that comes through that would not make it into my work resume. You can probably find that I have developed many work relationships I have maintained over the years, that I juggle family life and work and the two often cross, that I try to build other women up, that I am a bit of a workaholic, and that I have an expressive-driver work personality. Other activities, from gaming to sports to the movies we like and the books we read, might also become a data pool that helps paint a fuller picture of who we are, what we bring to the job, and how we fit the company culture. The same data could also be used to increase employee retention and engagement by appealing to the drivers that match what each employee most cares about.

Collaboration

The other aspect I am personally excited about when it comes to work and technology is collaboration. The challenges posed by growing real estate costs and by the lack of support for families with children or elderly parents, as well as companies’ attempts to diversify their talent pools, mean that the workplace is becoming more spread out, with a mix of office-based and remote employees.

Technology can help bring people closer together even when they are not in the same location. It is early days for augmented reality, all the demos seem to be more work than they are worth, and we are a long way from the cool holograms of sci-fi movies. Other technologies, however, such as voice interfaces and real-time translation, are already improving collaboration by making the workplace more inclusive, whether you have a disability or a multi-language team working together.

Personal relationships are the foundation of good business, and most people, no matter what country they are based in, will tell you that nothing replaces an excellent face-to-face interaction. Yet, when traveling to get together is not an option, we will be able to count on technology to bring us closer together. The market for collaboration software has been on the rise for quite some time, and some forecast it to reach $60 billion by 2023. Just because we cannot be physically in the same room does not mean we cannot collaborate in a natural and productive way.


When we talk about technology and work, the focus tends to be on the adverse effects people fear technology will have on their jobs. We often read about AI and automation taking our jobs, or about the skills the workforce will lack in this new tech-driven world. Such a change will not come overnight. As exciting as the scenarios painted in the Future of Work report are, they will be the result of a series of small steps taken over the next ten years. For both the positive and the negative, when we consider truly disruptive technologies such as AI, automation, or crypto, we get excited or concerned about what our world will be in ten years’ time, and we do not look enough at how technology will change our world tomorrow and every day after that until we get to 2030. Futurists do their job in showing us what the future will be, technology companies do their job in making it a reality, and we humans, whether at work or at home, are ultimately the ones who will embrace or reject such change.


The Evolution of Portable Entertainment

As a technology analyst and consultant, I have had the chance not only to study and chronicle the world of technology but also, once in a while, to get involved in a consulting project that I feel could have a specific impact on the world of tech.

One such project was one of the first MP3 media players brought to market, by Diamond Multimedia in 1998. I was brought into the project by a friend who had moved from Apple to Diamond because he saw the MP3 player as the next big thing in portable music. This was a few years before Apple introduced the iPod. At the time, the leading mobile music player was the Sony Walkman. Also around that time, Napster came to market with its digital music sharing system, and my friend and his boss foresaw a need for a portable MP3 music player, so Diamond harnessed this idea and created the Rio PMP300.

But the Recording Industry of America (RIAA) was not happy about this product.

Wikipedia has the details of the RIAA’s response to the Rio PMP300:

“On October 8, 1998, the Recording Industry Association of America, filed suit and asked for a temporary restraining order to prevent the sale of the Rio player in the Central District Court of California, claiming the player violated the 1992 Audio Home Recording Act. See RIAA v. Diamond Multimedia.

Judge Andrea Collins issued the temporary order on October 16 but required the RIAA to post a $500,000 bond that would be used to compensate Diamond for damages incurred in the delay if Diamond eventually prevailed in court. Diamond then announced that it would temporarily delay shipment of the Rio.

On October 26, Judge Collins denied the RIAA’s application.[3] [4] On appeal, the Ninth Circuit held that the Rio’s space shifting was fair use and not a copyright infringement.[5]

After the lawsuit ended, Diamond sold 200,000 players.[6]”

The chart below illustrates the history of portable music from the days of the Walkman to today. Interestingly, it does not include battery-powered boomboxes, which could also be portable, or the portable radios, going back to the 1940s, that teenagers danced to well before the Sony Walkman came onto the scene.

Apple’s iPod was the real game-changer because it made it easy to get ripped songs onto a portable music device. That was one of the big flaws of Diamond’s Rio PMP300: getting ripped songs onto it was very difficult. Apple created software for the Mac that let you easily copy your music from a CD and then transfer it to the iPod. This eventually led to Apple creating a dedicated music store for direct downloads, and with Apple supporting this process on Windows PCs too, the iPod took off and became the top portable digital music player for almost a decade.

With the iPhone, Apple created the next major portable music platform that has now eclipsed the need for a dedicated MP3 player and, along with streaming music services, smartphones have emerged as the go-to portable media player today.

In 1981, the music industry, via MTV, reached an important milestone for music performers. It helped popularize the music video and allowed music artists around the world to use video to enhance their performances. Music videos are now part of the portable music scene since most smartphones support streaming video along with streaming music.

While the iPhone, and smartphones in general, are the current portable music players for most people, I believe that the next big evolutionary leap in delivering digital music will come in two important steps.

The first will be to put the actual music player in the headphones and earbuds themselves. Today, people use Bluetooth radios to deliver music from a smartphone to headsets or earbuds. But I have recently seen some work in the labs on building the entire streaming wireless music delivery system into these headsets and earbuds as well. This would eliminate the need to carry a smartphone to get that music; you would only need the headset or earbuds.

We have actually had headsets with AM and FM radios in them for over three decades, and some headphones today can host an SD card with recorded music. Sony and others even have headphones with a built-in MP3 player and 4GB of storage, to which you can download recorded music for mobile playback.

One can even deliver stored music via a smartwatch to wireless earbuds as Apple allows with the Apple Watch and the iPhone. But the idea of creating a smart earbud or headset with a cellular chip that can access streaming music and audiobooks on demand is an interesting next step in portable music delivery.

The second step will be to integrate music videos into the portable music experience beyond what you can get via a smartphone. That will come via AR and VR glasses or goggles.

If you have seen Apple’s AR examples of how a person could be inserted into a game, you get the idea of what is around the corner for music concerts and videos. The work going on with VR and AR could eventually allow a person to be virtually transported into a music performance, to stand in the mosh pit at a concert or to dance with folks watching a band perform.

AR headsets could also enhance live music concerts. Imagine wearing a set of AR glasses at a concert and seeing the lyrics in front of you as the band performs. Or you could ask via the headset for information on the band performing as well as historical information about a song they may have on their setlist.

A recent article in Virtual Reality Pop shared a few examples of how AR and VR are being experimented with in the music industry:

Videos and Live Performances
Not surprisingly, a large number of VR and AR startups are attempting to gain traction in the live music industry. Although I can’t touch on all of them in a single article, here’s a selection of the companies jumping into the video and live performance spheres.

Within has entered into a deal with Universal Music Group to develop VR and AR experiences for some of the artists on its roster. The Chemical Brothers and St Vincent were the first from UMG to work with Within, developing a creative and interactive music experience called Under Neon Lights.

MelodyVR is a London-based company focused on its goal of offering live streaming of concerts in Virtual Reality. Although its full vision has yet to come to fruition, the company has worked with more than 650 international artists, including Post Malone, Blake Shelton, The Who, Kiss, and The Chainsmokers to develop innovative uses of VR in a live music setting, with the hope that VR technology will soon be widely adopted by consumers.
Facebook, in conjunction with Oculus Go and Gear VR, launched Oculus Venues live events last year with an initial Vance Joy concert. Oculus Go is a relatively affordable VR headset at $199 in the U.S. and offers the convenience of not being tethered to a computer. Although it’s early in the game, the potential for this offering to gain significant traction among concert-goers is an exciting move in the direction of the mass adoption of VR and AR in the world of music.

I believe that the integration of VR and AR experiences through smart glasses and goggles is not only where we are headed but also has the potential to create a whole new set of experiences that make music more personal and interactive. This appears to be the next big portable music platform, and it should be fun to watch it develop over the next few years.

Changes to Arm Licensing Model Add Flexibility for IoT

It’s tough enough when you have a business model that not a lot of people understand, but then when you make some adjustments to it, well, let’s just say it’s easy for people to potentially get confused.

Yet, that’s exactly the position that Arm could find themselves in today, as news of some additional offerings to their semiconductor IP licensing model are announced. But that needn’t be the case, because the changes are actually pretty straightforward and, more importantly, offer some interesting new opportunities for non-traditional tech companies to get involved in designing their own chips.

To start with, it’s important to understand the basic ideas behind what Arm does and what they offer. For over 28 years, the company has been in the business of designing chip architectures and then licensing those designs in the form of intellectual property (IP) to other companies (like Apple, Qualcomm, Samsung, etc.), who in turn take those designs as a basis for their own chips, which they then manufacture through their semiconductor manufacturing partners. So, Arm doesn’t make chips, nor are they a fabless semiconductor company that works with chip foundries like TSMC, Global Foundries, or Samsung Foundry to manufacture their own chips. Arm is actually two steps removed from the process.

In spite of that seemingly distant relationship to finished goods, however, Arm’s designs are incredibly influential. In fact, it’s generally accepted that over 95% of today’s smartphones are based on an Arm CPU design. On top of that, Arm-based CPUs have begun to make inroads in PCs (Qualcomm’s chips for Always Connected PCs, sometimes called Windows on Snapdragon, are based on Arm), servers, and even high-performance computing systems from companies like Cray (recently purchased by HP Enterprise). Plus, Arm designs more than just CPUs. They also have designs for GPUs, DSPs, Bluetooth/WiFi and other communications protocols, chip interconnect, security, and much more. All told, the company likes to point out that 100 billion chips based on its various designs shipped in the first 26 years of its existence, and the next 100 billion are expected to ship between 2017 and 2021.

Part of the reason they expect to be able to reach that number is the explosive growth predictions for smart connected devices—the Internet of Things (IoT)—and those devices’ need for some type of computing power. While many of the chips powering those devices will be designed and sold by their existing semiconductor company clients, Arm has also recognized that many of the chips are starting to be put together by companies that aren’t traditional tech vendors.

From manufacturers of home appliances and industrial machines, to medical device makers and beyond, there are a large number of companies that are new to smart devices and have begun to show interest in their own chip designs. While some of them will just leverage off-the-shelf chip designs from existing semi companies, many of them have very specific needs that can best be met—either technically, financially, or both—with a custom designed chip. Up until now, however, these companies have had to choose which pieces of Arm IP that they wanted to license before they created their own chip. Needless to say, that business model discouraged experimentation and didn’t provide these types of companies with the options they needed.

Hence the launch of Arm’s new Flexible Access licensing model, which will now let companies choose from a huge range (though not all) of Arm’s IP options, experiment with and model chip designs via Arm’s software tools, and then pay for whatever IP they end up using—all while receiving technical support from Arm. It’s clearly an easier model for companies that are new to SOC and chip design to make sense of, and it essentially provides a “chip IP as a service” type of business offering for those who are interested. However, Arm will still offer their traditional licensing methods for companies that want to continue working the way they have been. Also, Arm’s highest performing chip designs, such as their Cortex-A7x line of CPUs, will only be available to those who use the existing licensing methods, under the presumption that companies who want that level of computing power know exactly what they’re looking for and don’t need a Flexible Access type of approach.

For those who don’t follow the semiconductor market closely, the Arm chip IP business can certainly be confusing, but with this new option, they’re making a significant portion of their IP library available to a wider audience of potential customers. And that’s bound to drive the creation of some interesting new chip designs and products based on them.

Services are Key to Apple’s Emerging Market Strategy

From a hardware standpoint, Apple appears to have saturated their developed markets. While there can still be some minimal growth (low single digits) for the foreseeable future, there is not a great deal of hardware growth ahead for Apple. When I discuss this with investors, and even with contacts of mine in the supply chain related to Apple products, the question of Apple’s growth in emerging markets continues to come up.

On this matter, and by emerging markets I mean markets like India, Southeast Asia, and someday parts of Africa, I have concluded hardware growth will be much harder to come by than Apple and others realize. With India being pegged as one of the larger short-term growth opportunities for Apple, I’ll make some broader points specific to India in this analysis.

Apple’s Hardware Challenge in India
There are several important factors to understand when thinking about Apple’s iPhone strategy in India. Firstly, Apple has an extremely low market share in India. Most estimates have iPhone share at less than 10% of the installed base, but it is likely lower than 5%. There are roughly 400 million smartphone owners in India, and that number is expected to pass 700 million by 2022. It is a market as sizable as China, but with extremely different cultural constructs that will make Apple’s hardware positioning much more difficult in India than it has been in China.

Price is much more of a factor in India than it is in China, and Apple’s strategy has never been to compete on price. I don’t expect a change of strategy here, which is why I’m less optimistic about Apple’s hardware growth strategy in India. Also, India is a market where Google/Android has a stronghold. With >95% share, India cut its teeth on Android and has continued to deepen its dependence on the Google ecosystem. I view the challenge here as somewhat similar to Apple’s challenge in penetrating the greater Android base in the US. While Apple has seen favorable switching rates at times, the reality is that not being on Verizon on day one of the iPhone launch let Android gain a foothold in the US, and clawing that back has been a great challenge. For that reason, the US smartphone OS market largely remains a 50-50 split between iOS and Android. The last point I’ll make here is that multiple global smartphone reports I’ve read indicate Android itself has an extremely high loyalty rate. While no Android-branded phone has iPhone-level loyalty rates, Android itself does, with loyalty rates varying between 78-83% depending on the market.

While it is true Apple is working to be a bit more price-competitive by manufacturing phones in India, it will still be competing against phones with similar specs and lower prices from other Android brands. This, plus the particularly high Android loyalty rates in India, is going to make gaining switchers a challenge.

Services, not Hardware, Is Apple’s Revenue Opportunity in India
Having landed on the conclusion that hardware is not an immediate growth opportunity for Apple, I believe services are the right strategy for Apple to start developing customer relationships in India. My thesis here is much like the iPod strategy. A little-appreciated strategic factor of Apple’s iPod was that it was the mass-market entry product to the Apple experience. Up to that point in time, most consumers had never owned an Apple product and thus never had a chance to experience the quality of an Apple product and the overall Apple experience. The iPod paved the way for the iPhone in the mass market by offering customers an easy way to enter the Apple ecosystem and experience the Apple brand.

In India, I think services can play a similar role. While I explained how hardware is going to be challenging for Apple in India, I think services could be a much easier sell. A fascinating dynamic of India today is how much of Indian culture values frugality, or, more specifically, the deal and the pursuit of getting value for money. The iPhone is not well-positioned in India in the value-for-money equation; however, Indians value media in a much more balanced way than they value physical goods. In some cases, they value rich media more highly than physical goods. Bollywood itself is a great example, but broadly speaking, Indians’ desire for media content has always been high.

In the early days of cheap tablets, I recall hearing stories from friends who live in India about an emerging trend of Indian consumers purchasing extremely cheap tablets, less than $70 USD, and then going to a corner store and paying nearly half the price of the hardware to load the tablet up with movies, games, and other video content. This is a great example of how the value equation between hardware and rich media is quite different in India than in other parts of the world.

This is a key part of my thesis as to why I think Apple has a broader short-term revenue opportunity in India with their services than with hardware. However, doing this may require a bit of a strategy shift for Apple as it relates to emerging markets.

Cross Platform and Regionalization
American media, particularly movies, is popular globally. However, any effective services strategy requires regionalization and customization of content for the region. This may be even more important in India than in China, though China is similarly growing its localized media. While we do not know much about Apple’s cross-platform strategy for Apple TV+ specifically, it is unlikely the service will be available on Android out of the gate.

Whether Apple needs to embrace Android with all their services sooner rather than later is a fantastic debate to have, but not one I’ll flesh out here. What I will say is that Apple has no chance with services in India if they are not available on Android. This is critical to my thesis that Apple’s services are their way into India strategically, and it would require a bit of an India-specific approach, assuming Apple is hesitant to bring all their services to Android in other markets.

Apple would also need to continue to invest in India-specific media, and if this is done right, it could be seen as hugely valuable by the Indian market.

While I understand the broader argument that bringing Apple services to India could hurt their hardware strategy, there are several ways to think about this point. Firstly, if Apple is extremely strategic here, they can create an experience that has a path back to their hardware from a services entry point. This does not mean fully crippling the Apple services experience on Android, but rather creating some experiences that may still be better on Apple hardware. This could involve things around AI/ML, or deeper integration and ease of use, etc. The second way to think about this is that cross-platform services are Apple’s opportunity to win a customer who may never have been a customer to begin with. In every market, there is a huge opportunity for Apple to sell their services to customers they will likely never get as hardware customers. When you think about the broader services business opportunity, and the potential for pure growth for Apple, the reasons not to go cross-platform start to fall away, especially if there is a clear strategy to bring a hardware value proposition on the back of those services.

Ultimately, this last point is the one of most interest to me in an era of flat to slightly declining hardware revenue for Apple and a much larger customer opportunity with Apple’s services. How Apple plays the cross-platform game will be a critical part of their growth strategy, but one that does need to be attached to a long-term hardware roadmap.

Podcast: Intel Chiplet Technology, T-Mobile, Sprint, Dish and 5G Carriers

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing some of the latest semiconductor packaging technology announcements from Intel and what they mean for the overall evolution of “chiplets” and the semiconductor industry in general, and debating the potential impact of Dish’s involvement with a potential merger between T-Mobile and Sprint and what it says about the current state of 5G networks in the US.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Industrial Design and Operational Excellence Are at the Heart of Apple’s Success

I have been fascinated by the various doomsayers who were apoplectic about Apple’s industrial design team now reporting to COO Jeff Williams rather than directly to Apple CEO Tim Cook. They seem to think that great industrial design alone can keep Apple humming and growing.

About every five years, I write a column that attempts to explain how Apple developed its way of thinking and strategies. The last time I did a column like this was for Fortune Magazine in April of 2017, where I shared one particular thing that helped me understand Apple’s strategy.

If you have time, I encourage you to read that column, as it lays out the key principles of Apple’s success. However, in that article, I did not have enough space to add another piece of the puzzle that makes Apple so successful, and that is its world-class manufacturing and operations.

In a discussion with Steve Jobs before the iPhone came out, he told a group of us that one of the reasons Apple had been growing could be directly attributed to Tim Cook’s masterful revamping of its operations and manufacturing. He pointed out that designing the product was only half of a project’s success; unless you could manufacture it cost-effectively, efficiently, and in large quantities, it would never have a chance to succeed.

We now know that when Steve Jobs came back to Apple in 1997, he found that Apple’s manufacturing and operations were very poor and gave the task of updating this part of the business to Tim Cook. From 1997 to about 2010, Cook was the genius behind developing Apple’s world-class manufacturing lines and the overall operations that keep Apple humming and products delivered on time.

Although Cook is now CEO, he is still a master of operations, and Jeff Williams now reports directly to him. Williams was trained by Cook, and manufacturing and operations under him remain a world-class program.

When I read that, with Jony Ive leaving, the industrial design team would now report to COO Jeff Williams, given what I know about how Apple works, I saw this as a natural way to manage the transition. As Jobs suggested, design is only half the equation for success, and it works only if you can couple it to world-class manufacturing.

In Jean-Louis Gassée’s popular Monday Note, he echoes this idea that ID and operations go hand in hand, and he puts it into perspective in this week’s post:

“A more serious concern among the commentators is Apple’s new org chart. With Ive’s departure, Apple Design no longer reports to the CEO, but to the COO of Operations. As voiced by John Gruber in his eminently readable Daring Fireball blog:

“…when Jobs was at the helm, all design decisions were going through someone with great taste. Not perfect taste, but great taste. But the other part of what made Jobs such a great leader is that he could recognize bad decisions, sooner rather than later, and get them fixed.”
Gruber is hardly a doomsayer; he offers that Ive’s departure “may be good news” for Apple. Others, who are less sanguine about the re-org, foresee the Dark Ages:

“The design team is made up of the most creative people, but now there is an operations barrier that wasn’t there before,” one former Apple executive said. “People are scared to be innovative.”

This is silly, and belies a misunderstanding. There’s a difference between the traditional, personal, “artistic” design (and taste) that presides over the composition of, say, a 15th century Botticelli painting, and the Industrial Design (ID) that’s practiced today by any successful hardware company.

Industrial Design goes beyond the fit between form and function that we think of as good design, ID makes sure the product — cars, typewriters, iPhones — can be manufactured in large quantities, meeting cost and reliability targets.

Ive is a living representative of the relatively new lineage of industrial designers, of artists and engineers who understand that to design a product means taking care of the Look and Feel and the operational factors that are required to deliver their wares in extremely large quantities, on time, while meeting cost and reliability targets.”

As Gassée points out, great industrial design goes hand in hand with world-class manufacturing. In this case, putting the industrial design team under COO Jeff Williams not only makes sense but will be critical for Apple to continue to develop great products and make them with the quality and in the quantities that their customers demand. That is why putting this team under Williams, and not Cook, is so important for Apple’s future. Apple’s leadership knows why the two disciplines need to be coupled together and united in their approach to designing and making Apple’s hardware-based products.

Bumble gives Power to Women in Dating but not to Founder in Business

This week, Forbes published an exclusive investigation on Andrey Andreev and the work culture at dating company Badoo. If you are based in the US, you might not be as familiar with Badoo as you are with Tinder, but Badoo is a very popular dating app in Europe and Latin America. The investigation uncovered a culture of racism and sexism that former Badoo employees claim came from the top down, despite founder Andrey Andreev denying any knowledge of wild parties and generally inappropriate behavior, to put it mildly. Of course, this is not the first time we have heard about wild behavior at a tech company, so in a way, this is not really news.

Badoo employees’ inappropriate behavior did not stop at parties offering drugs and sex. Jessica Powell, Badoo’s CMO between 2011 and 2012, told Forbes that misogynistic and racist behavior was routine. She was asked “to act pretty for investors and make job candidates ‘horny’ to work for Badoo.” She also added that “female employees were routinely discussed in terms of their appearance.” Sadly, this is also not the first time we have heard of misogynistic mindsets shaping apps and services in the dating world, as in the case of dating agent ViDA and the code of conduct its founder promoted at every level of the organization.

So, if we have heard it all before, why am I covering this? The article initially sparked my interest but promptly moved me to a sense of aversion when I read that Andrey Andreev is behind Bumble, the dating app that focuses on putting women first. I had always linked Bumble to Whitney Wolfe Herd, the former Tinder co-founder who left the company after a sexual harassment case that was settled privately but that still ended in her losing her co-founder title.

It turns out Andreev was actually the one who approached Wolfe Herd with the idea for Bumble. He also put up the capital for the company, and he is still the majority owner. According to the article, he is far from being a silent partner, maintaining control of most operations in London despite Bumble being headquartered in Austin and keeping Wolfe Herd just a phone call away.

Hypocrisy or Need

Wolfe Herd might not have known Andreev’s true colors when she was first approached. What is puzzling to me is that Wolfe Herd told Forbes that she has never witnessed toxic behavior at the Badoo headquarters, and she stands firmly behind Andreev, going as far as adding: “He’s become my family and one of my best friends.” Of course, everybody is innocent until proven guilty, but given the nature of Bumble, I would think one might want, at a minimum, to perform some due diligence.

Of course, Wolfe Herd owes a lot to Andreev, and cutting ties now would have a significant business impact, but doesn’t Wolfe Herd owe something to Bumble supporters too? How can you advocate women’s empowerment when it comes to dating but decide to turn a blind eye to business practices that are, at the very least, condoned by your majority investor? Serena Williams, who was featured in Bumble’s Super Bowl commercial, might rethink her backing of what she thought was a female-founded business that put women first. And Bumble users might decide they do not want to contribute to the financial gain of hypocritical leaders. I, for one, feel cheated after having praised Bumble’s efforts in the past.

Women Businesses and Women Investors

If we give Wolfe Herd the benefit of the doubt, we might also think that a female owner might be less likely to come with the same baggage, so why not look for an alternative? Maybe because it is easier said than done. Looking at headline statistics: women own only 5 percent of tech startups, women hold just 11 percent of executive positions at Silicon Valley companies, and only 7 percent of partners at the top 100 venture capital firms are women. Furthermore, just as women across the board suffer from a pay gap, female founders receive less funding than their male counterparts. According to the Financial Times, in 2016, $58.2 billion worth of VC money went to companies with all-male founders; last year, women got just $1.46 billion in VC money.

Of course, a female founder or CEO does not guarantee that a company will be free of misogyny, racism, or other deplorable behavior; look at Theranos and Elizabeth Holmes. But I would hope that if you are building a business centered on women, in whichever aspect of their lives, having women as a crucial part of the organization would, at a minimum, offer a higher degree of first-hand understanding.

Empowering Women Is Not the Same as Monetizing Women

The reality is, however, that these are businesses, not charities. As much as I would like to think that Bumble was really started because someone thought it was time women made the first move in dating, the reality is much more mercenary than that. Andreev saw an opportunity to target women who feel pressured or intimidated by men making the first move. Or maybe the opportunity was for men who do not want to feel the pressure of the first move. Either way, Bumble was more about an untapped market opportunity than about women’s empowerment.

Empowering women to make the first move in a dating app is important, but what would truly be revolutionary would be creating a business environment where women are paid the same as their male colleagues, and where they feel free to speak up because they will be heard, supported, and lifted. This is what Wolfe Herd should aspire to do after this week’s revelations, remembering, as Madeleine Albright once said, that there is a special place in hell for women who don’t support other women.

Superhuman, Startups and Privacy, Ethical Product Design

Last week a company called Superhuman made some news when they publicly corrected course on some features of their email solution that raised public concern over privacy.

If you missed the story last week, I would not be surprised. Nor would I be surprised if many of you had not heard of Superhuman before. The company built an email solution, currently available only via invitation, that claims to give you email superpowers. A brief blurb on its website highlights the following features:

Superhuman is gorgeous. Blazingly fast. And comes with advanced features that make you feel superhuman. A.I. Triage. Undo Send. Insights from social networks. Follow-up Reminders, Scheduled Messages, and Read Statuses. To name but a few.

When early users on Twitter started talking about Superhuman and their fondness for the product, there were criticisms of the high price for an email app, given that they charge $30 a month for the service. But those early customers defended the price, given how much they appreciated the time savings and efficiency Superhuman provided them. Most of these early users were Superhuman investors and friends of the investor community. Two features, in particular, got the most attention: location tracking and read receipts.

Location tracking gave Superhuman users the ability to see where the recipients of their email were located at the time they opened the email. In their blog post, Superhuman said they addressed the location tracking and immediately disabled it; they had not considered how it could be used by bad actors. This is a point I want to address shortly.

The second was read receipts, which gave the sender the ability to see not just that you have opened their email but also how many times. Here again, Superhuman has addressed this feature, turning read receipts off by default but still allowing a user to turn the feature on if they want. The key point here is that the recipient has no way of knowing they are being tracked, or that their read statuses are being sent, and no way to opt out of this feature.
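For context on the mechanism, email read statuses like these are typically implemented with an invisible tracking pixel: a unique 1x1 image embedded in the HTML body, whose fetch from the sender’s server reveals when (and, via the requesting IP, roughly where) a message was opened. The sketch below, in Python with a hypothetical tracker endpoint and message IDs, is not Superhuman’s actual implementation, but it illustrates why the recipient has no say in the matter; merely rendering the email with remote images enabled triggers the tracking:

```python
# Sketch of pixel-based read tracking (hypothetical endpoint and IDs).
TRACKING_HOST = "https://tracker.example.com"  # assumed tracker URL


def embed_tracking_pixel(html_body: str, message_id: str) -> str:
    """Append an invisible 1x1 image whose URL is unique to this message.

    When the recipient's mail client loads remote images, the request
    for this URL tells the sender's server that the message was opened;
    the request's timestamp and source IP reveal when and roughly where.
    """
    pixel = (
        f'<img src="{TRACKING_HOST}/open/{message_id}.gif" '
        'width="1" height="1" alt="" style="display:none">'
    )
    return html_body + pixel


def count_opens(request_log: list, message_id: str) -> int:
    """Count how many times a message's pixel was fetched, i.e., opens."""
    return sum(1 for path in request_log if message_id in path)


# The recipient never opted in: opening the email three times with
# remote images enabled produces three pixel fetches in the server log.
msg_id = "msg-abc123"
html = embed_tracking_pixel("<p>Hello!</p>", msg_id)
server_log = [f"/open/{msg_id}.gif"] * 3
print(count_opens(server_log, msg_id))  # → 3
```

Blocking remote image loading, as some mail clients do by default, is effectively the only recipient-side defense, which is exactly why the lack of any opt-out drew so much criticism.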

There are a number of interesting points to be made here. The primary one, in my mind, is how the product design led to features that focused much more on the potential customer than on the person at the other end of that customer’s experience. Note this statement from Superhuman’s CEO Rahul Vohra.

We take the second criticism to heart too. It made sense for read statuses to be on by default when our user base was early adopters. They knew exactly what they were buying and were excited to buy it.

On this point, I thought this tweet from Josh Constine was apt.

In the blog post highlighting the criticisms and the changes they were making to Superhuman, Rahul defends the read receipts feature by pointing to several other power-user email programs that also enable this feature, citing use cases like sales follow-ups and customer support. But the bigger question here is the moral imperative of the feature, not whether other programs that enable it justify its existence.

The biggest point here is that the end-user cannot opt out of being tracked and, furthermore, has no idea they are being tracked. While customer service or sales follow-up sounds like a valid reason for this feature, I certainly don’t want to get an email from marketers or salespeople saying they know I’ve looked at their email five times and are wondering why I haven’t responded.

Ethical Product Design
Given the bright light Apple has shined on privacy, I think it is safe to assume that product design going forward will have to be thought through from a more ethical standpoint. I make this point in particular for the startup community, as exemplified by Superhuman: the entire end-to-end experience of consumer privacy needs to be thought through.

This is a new wrinkle, and one that I'm not seeing addressed in many of the consumer startup pitch decks I get to see through my work with angel investors and the broader VC community. The idea of ethical product design may take some time to take root fully, but I have no doubt we are on the cusp of a change in thinking when it comes to overall product design and consumer privacy. Perhaps this is something that needs to start being included in business school curricula and the broader educational system. It is also something that I think many VCs need to grasp when it comes to their investments in both the consumer and the enterprise.

The Power of the Consumer Voice
The last point I want to make is about the power of the consumer's voice. Interestingly, the social pressure Superhuman felt did not come from mass media coverage criticizing their features but from an outpouring on Twitter of people highlighting how these features are an invasion of their privacy. The power of Twitter to enable the voice of the consumer and pressure a company to make a change was on full display last week in this situation with Superhuman, a little-known startup.

If a social outcry on Twitter was able to hold a little-known startup accountable and drive change, then just imagine what will happen when something like this hits a bigger, better-known company. Essentially, Twitter has proven to be a powerful enabler of the consumer's voice and a tool for holding institutions accountable. Whatever one's opinion of Twitter, the ability for a collective outcry to be amplified in one place is a benefit, particularly in a situation like this one with Superhuman, where it drove positive change.

Intel Highlights Chiplet Advances

Talk to anybody in the semiconductor industry these days and all they seem to want to talk about is chiplets, the latest development in SoC (system on chip) design. The basic rationale behind chiplets is that several different developments are making the industry's traditional method of building increasingly larger chips less appealing, both technically and financially. So, instead of designing sophisticated, monolithic chips that incorporate all the important elements on a single silicon die, major semiconductor companies are designing products that break the larger designs into smaller pieces (hence "chiplets") and combine them in clever ways.

What makes chiplet design different from other SoC design methodologies that have existed for many years is that many of these new chiplet-based parts are putting together pieces that are made on different process technologies. So, for example, a chiplet design might link a 7 nm or 10 nm CPU with a 14 nm or 22 nm I/O element over some type of high-speed internal interconnect.

The reason for making these kinds of changes gets to the very heart of some of the transformational developments now impacting the semiconductor business. First, as has been widely discussed, traditional Moore’s Law advancements in shrinking transistor size have slowed down tremendously, making it difficult (and very expensive) to move all the elements inside a monolithic chip design down to smaller process geometries. Plus, even more importantly, it turns out that some important elements in today’s chip designs, such as analog-based I/O and some memory technologies, actually perform worse (or simply the same, but at a significantly higher cost) in smaller-sized chips. Therefore, some semiconductor components are better off staying at larger process manufacturing sizes. In addition, the processing requirements for different types of workloads (such as AI acceleration) are expanding, leading to the need to combine even more types of processing technology onto a single component. Finally, there have been some important advancements in chip packaging and interconnect technologies that are making the process of building these multi-part chiplets more efficient.

Most large chip companies have recognized the importance of these trends and have been working on advancing their various chiplet-related technologies for the last several years. To that end, Intel just announced some important new additions to its arsenal of chip packaging capabilities at the Semicon West conference this week, all designed to enable even more sophisticated, more flexible, and better yielding chiplet-based products in the years to come. At past events, Intel has talked about its EMIB (Embedded Multi-die Interconnect Bridge) technology, which provides horizontal, or 2D, connections across different chiplet elements. They’ve also talked about Foveros, which is their 3D stacking technology for putting multiple elements in a chip design on top of each other. The latest development is a logical combination of the two, which they call Co-EMIB, that enables both 2D-horizontal and 3D-vertical connections of components in a single package.

In order to efficiently deliver power and data to these various components, Intel also developed a technology called ODI (Omni-Directional Interconnect), which works through and across chips to provide the power and low latency connections needed to perform closer to monolithic chip designs. Finally, the company also announced a new version of their AIB (Advanced Interface Bus) standard called MDIO that provides the physical layer connect for die-to-die connections used in EMIB.

Together, the new advances give Intel more flexibility and capability to build increasingly sophisticated chiplet-based products—the real fruits of which we should start to see later this year and for several years to come. In addition, these developments help to address some of the challenges that still face chiplets, and they should (hopefully) help to drive more interoperability across multiple vendors. For example, even though the interconnect speeds across chiplets are getting faster, they still don’t quite meet the performance that monolithic designs offer, which is why a technology like ODI is important.

In terms of interoperability, there have been some notable examples of chiplet designs that combine pieces from different vendors, notably the Kaby Lake G, which combines an Intel CPU built on Intel's 14nm+ process with an AMD GPU built on GlobalFoundries' 14 nm process, along with HBM (High Bandwidth Memory). However, right now most vendors are focused on their own inter-chip connection technologies (NVLink for Nvidia, Infinity Fabric for AMD, etc.), although there have also been some industry-wide efforts, such as CCIX, Gen-Z, and OpenCAPI. Still, the industry is a very long way away from having a true chip-to-chip interconnect standard that would allow companies to use a Lego-like approach to piece together chiplets from whatever processor, accelerator, I/O, or memory elements they would like.

Practically speaking, Intel recognizes the need to drive open standards in this regard, and they have made their AIB (and now, MDIO) standards available to others in an effort to help drive this advancement. Whether or not it will have any real-world impact remains to be seen, but it is an important step in the right direction. Particularly in the world of AI-specific accelerators, many companies are working to create their own chip designs that, ideally, could dramatically benefit from being combined with other components from the larger semiconductor players into unique chiplet packages.

At Baidu’s Create AI developer conference in China last week, for example, Intel talked about working with Baidu on Intel’s own Nervana-based NNP-T neural network training processors. Baidu has also publicly talked about its own AI accelerator chip called Kunlun (first introduced at last year’s Create conference), and although nothing was said, a logical connection would be to have future (or more likely, custom) versions of the NNP-T boards that incorporate Kunlun processors in a chiplet-like design.

Though they represent a significant departure from traditional semiconductor advances, it's become abundantly clear that the future of the semiconductor industry is going to be driven by chiplets. From this week's official launch of AMD's 3rd generation Ryzen CPUs—which are based on chiplet design principles that interconnect multiple CPU cores—to future announcements from Intel, AMD, Nvidia and many others, there's no question that the flexibility that chiplets enable is going to be critically important for advancements in semiconductors and computing overall. In fact, while there's no doubt that improvements in process technologies and chip architectures will continue to be important, it's equally true that advances in the previously arcane worlds of chip packaging and interconnect are going to be essential to the advancement of the semiconductor industry as well.

Tech Manufacturing Moving Out of China at Rapid Rate

I spoke last week with one of my friends who is in Vietnam about the current economic trend there that is helping prop up the Vietnamese economy.

Although Vietnam is not yet known as a tech manufacturing powerhouse, it has become one of the major countries that could evolve to replace some of the Chinese manufacturers caught in a tariff battle with the US today.

According to my friend, Vietnam sees a huge opportunity to steal away some tech manufacturing from China. Its government is working closely with ODMs that have manufacturing facilities in China to help them expand older tech factories that have sat idle for years, or build new ones.

At the moment, Vietnam is mostly known as a manufacturer of apparel and shoes. But that looks likely to change. Already, some of the top PC makers have started to move some final assembly of their products to Vietnam to get around any current or future tariffs.

The chart below shows that U.S. imports from Vietnam have surged by 38%, with a total of $20.7 billion in products shipped to the US from its ports.

The chart also shows that imports from China are down by 12.8%, although at this time that is more related to the tariffs than to any mass exodus of manufacturers.

But for China, any serious move of manufacturing out of the country will have serious long-term ramifications for its economy. One of the reasons China's economy has done so well is that the country propped up its manufacturing programs and used them to move a younger generation of agricultural workers into new cities and better jobs.

This program started over 20 years ago. At the time, much of China was still very agricultural, and kids who were born in this environment were making about $10 a week and still living with their parents. China knew that this could lead to serious unrest and put in place a program to get them off the farms and into low to mid-level skilled labor jobs, especially in factories.

China also wanted to expand certain cities to make them manufacturing hubs and give them special trade designations so that manufacturing firms outside of China would be enticed to invest in China and create new factories, especially for tech products.

This program worked exceptionally well. These factories recruited millions of kids from the farms and, in most cases, paid them a wage of about $100 a week instead of $10. Most lived in company-subsidized dorms, so they ended up with real buying power. The longer they stayed working in the factories, the more they were able to learn and advance. Through this process, they have increased their wages, and this, in part, has helped build a middle class in China that has dramatically driven its economy.

As you can imagine, losing even 10%-20% of manufacturing to places outside of China will have an impact on its economy. This is why China has a lot to lose in this tariff war and has started a major push for the Chinese to buy mostly Chinese-made products to help bolster its manufacturing base.

I am hearing that the Chinese government is threatening to penalize manufacturers from Taiwan, Korea, and other Asian regions that have factories in China if they begin shifting a lot of their customers' work out of those Chinese factories. This is still in the early stages of discussion within the Chinese government, but one could imagine that if China's manufacturing base were genuinely threatened, a move like this would be highly plausible.

New York Times reporter Keith Bradsher wrote a good piece entitled “A China-US Trade Truce Could Enshrine a Global Economic Shift,” in which he argues:

“even a fragile truce could have lingering implications. The United States would keep in place broad tariffs on Chinese goods for months or perhaps years to come. Global companies would almost certainly respond by continuing to shift at least the final stages of their supply chains out of China. Uprooting an entire supply chain is a nightmare task,” said Jon Cowley, an attorney in the Hong Kong office of Baker McKenzie, a global law firm, who advises corporate clients on tariffs and supply chains. “It takes years, if not decades.”

President Trump warned this past week that he was concerned about the influx of goods from Vietnam. The surge could invite scrutiny from the Trump administration if it believes that companies are pretending to make products outside China but are simply clipping together Chinese-made parts.

Still, China has few options to stop those shifts. Trade between the two countries is so lopsided that China has many fewer American imports to tax. It could slam American companies that sell vast amounts of products in China, like Apple or General Motors, but pinching those companies could hurt the Chinese workers who make those products.

As this article points out, many US manufacturers see only instability between China and the US and now believe that, regardless of these tariff pauses, the handwriting is on the wall: they need to seriously consider moving the manufacturing of a significant portion of their US-bound products out of China, starting now. It will happen slowly, but I sense that the Chinese manufacturing boat has turned around, and more and more manufacturers will begin looking for new places to make their products in the future.

The Virtual Reality Inflection Point

As the decade comes to an end, we have seen the world's biggest consumer technology product, the smartphone, pass its peak and enter a worldwide decline. It is going to be a long time before we see another market as big as the smartphone market, and as we wait, a number of interesting technologies are being packaged together to solve current problems and make future electronics possible. Virtual reality was once seen as a potential short-term market opportunity, but after a slow start, we have still yet to see a true market for VR emerge. While there are a few variables required, I do think we could be on the cusp of VR's inflection point.

The market size for VR headsets is still up for debate. However, if we believe, as I do for now, that VR headsets are optimally positioned primarily as a gaming platform, then we can use some basic numbers for console sales of hardware that costs north of $200. Approximately 42 million video game consoles are sold each year, with an installed base north of 300 million. Quality VR headsets will cost $199-$399 (like the Oculus Quest), which leads me to believe we are looking at a similar market size in annual sales. If the VR headset market is, at a minimum, as large as the console market, then it is sizable enough to garner investment and innovation from many technology companies. It's enough to keep the market interesting, let's put it that way.
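As a sanity check on that sizing, here is the back-of-envelope arithmetic using only the rough figures cited above; these are illustrative inputs, not forecasts.

```python
# Back-of-envelope VR hardware market sizing via the console-market analogy.
# Inputs are the rough figures cited in the text, not actual forecasts.
annual_console_units = 42_000_000   # consoles sold per year
price_low, price_high = 199, 399    # quality standalone headset range (USD)

low = annual_console_units * price_low
high = annual_console_units * price_high
print(f"Implied annual hardware revenue: ${low/1e9:.1f}B - ${high/1e9:.1f}B")
```

Even the low end of that range is a multi-billion-dollar annual hardware business, which is the basis for the claim that the market is big enough to attract sustained investment.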

But, like many hardware platforms, content is king, and this market will go nowhere without a robust and innovative collection of entertainment experiences, primarily video game content. With that caveat established, I've had several experiences with the Oculus Quest that have made me more of a believer in VR than I was before using it.

First off, on the hardware, the Oculus Quest confirms my conviction that VR headsets needed to be cordless, standalone platforms in order to gain any meaningful market traction. This seems obvious, but it becomes even more obvious once you have tried a corded and then a cordless VR solution. The second is the price: we have been waiting for a standalone, cordless VR headset to hit the market, and the Quest has nailed the price at $399. Lastly, the experience is nearly as good as any high-end VR experience I've had with expensive headsets tied to powerful GPUs, which is impressive given that it is running a several-year-old Qualcomm Snapdragon 835. The visuals are rich with low latency, and the display is much higher resolution than I expected. I can only imagine how much better this platform will be when it runs a more recent Qualcomm part like the Snapdragon 855.

While the library of games is not yet robust, two titles in particular sold me on where VR can go.

Entirely New Gaming Experiences
At a high level, what I've always been watching for was a developer taking advantage of the unique experience VR can enable to do something truly innovative, unlike anything we have experienced before. For me, that was a game called Superhot VR. This game can only exist on a VR platform, and that is what makes it so interesting. The concept is like a first-person shooter/action game, but your physical movements dictate your strategy to move through each room. It is essentially a puzzle that involves fighting red glass figures, and how fast or slow you move your body dictates the pace of the action. It is a remarkable experience. This game is a prime example of something you can only experience in VR and was one of the coolest games I've played in a long time.

The second is Vader Immortal. This game is the first in a series of episodic content where you play a role in the Star Wars universe. While there is not nearly enough lightsaber action, in my opinion, there is a mini-game component called Lightsaber Dojo where you get to engage in as much lightsaber action as you like. The first time I picked up the lightsaber and ignited the blade, along with that awesome sound of a lightsaber turning on, I was giggling like a little boy.

This game led me to believe a few other things related to VR adoption. The first is that Disney has the potential to blow this platform wide open. Nearly all of their Marvel franchises would make incredible VR games. On top of that, Star Wars experiences alone could drive VR into the mainstream. We are one Jedi game away from VR going mainstream.

If you think that sounds crazy, look at the exposure, as well as the strong sales, of Lenovo's Jedi Challenges AR headset. This product, mostly a toy, launched at a price of $199 and during the holiday season was nearly impossible to find. Even now, Tiger sells a Star Wars lightsaber game that connects to a TV for $129. A fully immersive, interactive, and well-produced Jedi game alone would cause the Oculus Quest at $399 to go gangbusters.

The big takeaway for me was how different quality game experiences in VR are compared to their PC or console counterparts. This is the opportunity waiting to be unlocked by game publishers: if they create entirely new game experiences for VR platforms, we will see this market develop, and develop quickly.

The Apple Watch is a lifesaver for many, including myself

This week’s ThinkTank piece is a bit personal. It is about the role the Apple Watch has played in my health. In my case, it has become a critical monitor that helps me track my blood sugars in real time and gives me an essential tool in my quest to control my diabetes.

I use the Dexcom G6 Continuous Glucose Monitor, whose sensor I wear to record my blood sugar readings all day long. It sends those readings via Bluetooth to the Dexcom iPhone app, which then sends them to my Apple Watch. For 20+ years, I had to prick my fingers up to four times a day to determine my blood sugar and the amount of insulin to take. Now, I just look at my Apple Watch to see my blood sugar reading on one of its screen complications. That means I no longer prick my finger and can see my blood sugar readings on the Apple Watch on demand. That alone makes the Apple Watch one of the most important pieces of technology I own.

But a recent incident has also made the Apple Watch, with its ECG monitoring and constant recording of my heartbeat, very valuable. About a month ago, I could feel my heart beating very fast. My normal heart rate is about 52, but in an instant, it jumped to 140 while I was sitting still. I could feel the heart beating faster, and the Apple Watch showed me a graph of what was going on. For a period of about an hour, my heart rate stayed between 120-140, and the watch charted the spikes in real time.

At the 140 beats per minute peak, I took an ECG reading on the Apple Watch and got an AFib warning. While it did not confirm AFib, it did suggest I immediately talk to my doctor and show him the reading. After an hour, my heart rate went back to 52 and has stayed there ever since.

But in looking at the heartbeat graphs, I discovered something I had not seen before. Even though my average heart rate stays around 50-55, I could see continual spikes where the heart rate jumped from 52 to about 65-75, often within a single minute of monitoring. This caused me real concern, as I was not aware that I even had an irregular heartbeat.

I should also note that I had a triple bypass in 2012, so I am very conscious of any changes in my heart health. Given my heart health history and these new heart rate events, I made an appointment with my cardiologist to get some tests done to see what was going on.

By the way, after the heart rate jumped up considerably, it has not happened again. I have also taken an ECG reading on the Apple Watch bi-weekly, and it comes back normal, so my concern about AFib has gone down, but I still felt I needed to be checked out by my cardiologist.

What is important for me and many others is that the Apple Watch can not only track fitness activities but also monitor health issues like heart health and diabetes. It gives users information about those conditions, and if something is outside of a normal range and needs to be checked, it prompts the person to see their doctor.

Apple knew what it was doing when it created the Apple Watch and clearly decided to focus on health as a primary reason for it to exist. Yes, it can do much more, but its ability to help keep one healthy, as well as alert you to health abnormalities, can't be overstated.

I did see my doctor and had multiple tests. The good news is that I do not have AFib. However, the tests showed that I had palpitations, an irregular heartbeat, and minor tachycardia, which is related to the electrical signals of the upper chamber of the heart. At the moment, these issues are all very mild and did not require anything major such as a pacemaker, electric shock to get my heart rhythms in sync, or other treatments that could have been invasive.

Instead, I have to watch my diet closely and increase my exercise, or these conditions could get worse over time if I don't take care of my heart health. Because I have not had any additional high heart rates, my inclination was to write it off as an anomaly. But the consistent monitoring of my heartbeat by the Apple Watch pushed me to the doctor to get it checked out.

Often we hear stories of how the Apple Watch or even other fitness trackers have helped save lives. While we know these stories were shared by real people, they are mostly faceless individuals whom we are glad for but have no personal connection to.

If you have read my columns here and/or followed me, as many of you have, throughout my 38 years in the industry, I hope this story resonates with you. A product like the Apple Watch has become a lifesaver for me in its own right. That is why I am glad Apple created this important health monitoring technology, which happens to come in a watch form with great design and many health-related functions.

Ray Tracing Momentum Builds with Nvidia Launch

As a long-time PC industry observer, it’s been fascinating to watch the evolution in quality that computer graphics have gone through over the last several decades. From the early days of character-based graphics, through simple 8-bit color VGA resolution displays, to today’s 4K rendered images, the experience of using a PC has dramatically changed for the better thanks to these advances. The improvements in computer graphics aren’t just limited to PCs, however, as they’ve directly contributed to enhancements in game consoles, smartphones, TVs, and virtually every display-based device we interact with. The phenomenal success of gaming across all these platforms, for example, wouldn’t be anywhere near as impactful and wide-ranging if it weren’t for the stunning image quality that today’s game designers can now create.

Of course, these striking graphics are primarily due to graphics processing units (GPUs)—chips whose creation and advancement have enabled this revolution in display quality. Over the years, we’ve seen GPUs used to accelerate the creation of computerized images via a number of different methods including manipulating bitmaps, generating polygons, programmable shaders, and, most recently, calculating how rays of light bounce off of images in a scene to create realistic shadows and reflections—a technique referred to as ray tracing.

Ray tracing isn’t a new phenomenon—indeed, some of the earliest personal computers, such as the Amiga, were famous for being able to generate what—at the time—felt like very realistic looking images made entirely on a PC via ray tracing. Back then, however, it could often take hours to complete a single image because of the enormous amount of computing power necessary to create the scene. Today, we’re starting to see the first implementations of real-time ray tracing, where GPUs are able to generate extremely complex images at the fast frame rates necessary for compelling game play.
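To make the concept concrete, here is a deliberately tiny ray tracer that renders a single shaded sphere as ASCII art. It is a toy illustration of the per-pixel ray/intersection/shading loop described above, not how a production GPU pipeline is written; real-time engines run this kind of calculation (plus reflections and shadows) millions of times per frame, which is why dedicated hardware matters.

```python
# Toy ray tracer: fire a ray through every pixel, test it against one sphere,
# and shade hits by how directly the surface faces the light.
import math

def ray_sphere(origin, d, center, r):
    """Return the distance along unit-length ray d to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(a * b_ for a, b_ in zip(oc, d))
    c = sum(x * x for x in oc) - r * r
    disc = b * b - 4 * c          # quadratic discriminant (a == 1 for a unit ray)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width=24, height=12):
    light = (-1 / math.sqrt(3), 1 / math.sqrt(3), -1 / math.sqrt(3))
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Camera at the origin, image plane at z = 1; map pixel to [-1, 1].
            x, y = (i / width) * 2 - 1, 1 - (j / height) * 2
            n = math.sqrt(x * x + y * y + 1)
            d = (x / n, y / n, 1 / n)
            t = ray_sphere((0, 0, 0), d, (0, 0, 3), 1)
            if t is None:
                row += " "        # ray missed: background
            else:
                hit = tuple(t * k for k in d)
                normal = tuple(h - c for h, c in zip(hit, (0, 0, 3)))
                lam = sum(nm * lt for nm, lt in zip(normal, light))  # Lambert term
                row += ".:-=+*#%@"[max(0, min(8, int((lam + 1) * 4)))]
        rows.append(row)
    return "\n".join(rows)

print(render())
```

Even at this scale, every pixel requires an intersection test and a lighting calculation; scale the image to 4K and add bounced rays for reflections and shadows, and the computational appetite of real-time ray tracing becomes obvious.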

Nvidia kicked off the real-time, PC-based ray tracing movement with the debut of their Turing GPU architecture and the RTX 2000 series graphics cards based on those GPUs last year. Now the company is working to push the momentum forward with its second-generation desktop graphics cards, the RTX Super line, including the RTX 2060 Super, RTX 2070 Super, and RTX 2080 Super. All three cards offer performance improvements in both ray tracing and traditional graphics acceleration. At the high end ($999), the RTX 2080 Ti remains the highest performing card in the Nvidia line, while at the low end ($349), the original RTX 2060 remains the lowest priced option. In between, the original 2070 and 2080 are being replaced by their Super versions (at the same $499 and $699 prices), while the 2060 Super, at $399, ups the onboard graphics memory to 8 GB and nearly matches the performance of the original RTX 2070. As a bonus, all three RTX Super cards come bundled with two games that support real-time ray tracing: Control and Wolfenstein: Youngblood.

Nvidia faced some criticism (and, reportedly, saw somewhat muted sales) after the launch of the first-generation RTX cards because of the limited support for real-time ray tracing in popular PC gaming titles. Since then, the major gaming engines, including Unreal and Unity, have announced support for ray tracing, as has Microsoft's DirectX Raytracing (DXR) API, along with several AAA gaming titles, including Cyberpunk 2077 and Call of Duty: Modern Warfare. In addition, other games, such as Quake II RTX and Bloodhound, have announced support for accelerated ray tracing hardware.

On top of this, recent announcements from both Microsoft (Project Scarlett) and Sony (PlayStation 5) made it clear that the next generation of game consoles (expected in 2020) will incorporate hardware-based support for real-time ray tracing as well. Interestingly, both of those devices will be powered by AMD-designed GPUs, strongly suggesting that AMD will bring real-time ray tracing hardware to future generations of its Radeon line of desktop and laptop GPUs.

As the market has demonstrated, not everybody currently feels the need to purchase GPUs with dedicated ray tracing accelerated hardware. Many gamers focus on purchasing desktop graphics cards (or gaming laptops) that can play the current titles they’re interested in at the fastest possible frame rates and the highest possible screen resolutions at price points they can afford. For those gamers who are thinking ahead, however, it’s clear that there’s a great deal of momentum starting to build around real-time ray tracing. In addition to the previous examples, both Nvidia and AMD have announced software-based support of ray tracing in the latest drivers for their existing GPUs, which will likely encourage more game developers to add support for the technology in their next generation games. While the software-based solutions won’t run as fast, nor provide the same level of image quality for ray traced effects as hardware accelerated solutions, they will at least make people more aware of the kind of graphics enhancements that ray tracing can provide.

The evolution of computer graphics is still clearly moving ahead, and, as a long-time industry watcher, it's great to see the once far-off concept of real-time ray tracing finally come to life.

The Disproportionate Focus on ‘Wireless Competition’

The road to T-Mobile’s acquisition of Sprint being approved just became a little bumpier this week, with news that additional states have joined the effort to block the deal. This has been a roller-coaster, overly politicized process that’s now dragged on for more than a year, with no clear timeframe for a resolution in sight. In a recent column for Fierce Wireless, I reiterated the three ‘big picture’ reasons why the deal should be approved: better for 5G, better for broadband competition, and better for competition in the enterprise segment.

In this column, I'd like to briefly explore another angle: a continued curiosity about why there is so much focus on the level of competition in wireless, given that so many other industries in communications, digital media, and the internet are far less competitive than even a three-player (plus MVNO/reseller) wireless market would be.

Closest to home is broadband. This market remains a monopoly in half the country and at best a duopoly, with high prices and middling average speeds compared to other ‘peer’ countries. Now, let's look at other sectors of the telecom space. In telecom equipment, three suppliers control more than 90% of the market. And with Huawei largely shut out of the U.S. market (and a growing number of other markets), there are two players supplying the essential equipment for 4G and 5G networks. Other telecom ‘sectors’ with three or fewer players owning 90% of the market include enterprise Wi-Fi, OSS & BSS, and towers, to name a few. In smartphones, it's largely a two-player world in much of the world, with Apple and Samsung owning some 90% of the industry's profits. And in smartphone OSs, it's an iOS and Android world (with Android controlling 70%+ share in many countries).

How about some related Internet and digital media sectors? With all the consolidation in the media space, we now have Disney owning 40% of the global box office, with Fox folded in. Digital advertising? Nearly 60% is Google and Facebook, with a long tail of pretty large companies (Amazon, Microsoft, Verizon) fighting for scraps. Satellite TV and satellite radio are both two-player markets. Some 70% of the streaming music market is owned by Spotify, Apple, and Amazon. Public cloud? Amazon and Microsoft control 70% of that market, and the share of ‘Other’ dropped from 52% to 16% between 2016 and 2019. And lest you think that the Internet travel business is competitive, know that Booking Holdings owns Booking.com, Kayak, and Priceline, while Expedia Group owns Expedia.com, Hotwire, Hotels.com, Trivago, and Travelocity. Online ticketing: Live Nation/Ticketmaster has 85%+ share of the market.

Some other examples in major sectors of the digital universe:

  • Search: Google has 90% share of the search engine market, worldwide
  • E-Commerce: Amazon has 35% share of all e-commerce, and more than 50% in seven major categories (books, toys & games, baby products, etc.)
  • Online Maps: Google Maps has 154 million monthly users, while Waze (owned by Google) and Apple are next at ~25 million
  • Social: The Facebook universe (Facebook, Messenger, Instagram, WhatsApp) accounts for 4 of the top 5 apps in global MAUs.

I’m sure there are additional examples, but the above probably makes the point.

Getting back to wireless, there are three additional arguments that favor the move from four competitors to three:

  • Capital intensity. Wireless operators spend some 20% of their revenues on capital expenditures. This is much higher than nearly any related sector. And that certainly is not going away with what will be required for the 5G buildout.
  • Logistics. Much of the 5G build, especially in the higher, mmWave bands, will require the deployment of enormous numbers of small cells. The sheer effort of getting that number of cells approved and placed in municipalities would make a four-player market a logistical quagmire, with unneeded duplication of facilities.
  • Market is more competitive than you think. In addition to the four facilities-based players, there is still a healthy MVNO/resale market, with TracFone (20m+ subs across numerous ‘sub-brands’), Metro PCS, and Cricket among the largest. Plus, DISH holds enough spectrum to become a fourth facilities-based competitor, if it chose to actually deploy that spectrum rather than just sitting on it.

I’ve argued for a year that those opposing the T-Mobile deal have been looking at it through the wrong lens. Then, add the ‘proportionality’ argument above, which shows that other industry sectors are far less competitive than the wireless market. All this makes me puzzled as to why there’s been this outsized, expensive, and delay-inducing opposition to a deal that would make the U.S. wireless market structure look like that of most other developed countries.

Apple’s Intersection of Design and Operations

Over the weekend, the Wall St. Journal published a hard-hitting piece on Apple and some of Jony Ive’s frustrations. I’m sure many of you have read it by now, but in case you haven’t, I encourage you to do so. The article contains a great deal of ammo for Apple’s favorite critics. From the comments from the peanut gallery, mostly on Twitter, you would be led to believe that Apple will never design another great product again. Everyone seems to be homing in on the most popular criticism of Apple’s management to date, which is that they are increasingly focusing more on operations than product design.

This criticism is somewhat accurate, since Apple has needed world-class operational focus in order to scale to meet demand for its products, but it misses the point when distilled to a claim that operations is their only focus.

Design Friction
Talk to anyone who has ever designed a piece of hardware, and you will hear them lament the trade-offs they had to make so that their wonderful creation could exist. Within hardware design, there are two truths: hardware is hard, and designing something that can be mass-manufactured with traditional tools is even harder. I’ve worked with plenty of hardware startups who bring brilliant creatives into the design process only to have those same creatives grow frustrated because their design vision could not be manufactured at scale.

Through Apple’s manufacturing history, I had heard stories of Apple’s execs, and even Jony, going to China and collaborating with companies like Foxconn to troubleshoot manufacturing processes, design textures, and other facets of making Apple products that were proving difficult to mass-produce. I’ve similarly heard from manufacturing companies that no company brings them more complex design challenges to solve than Apple does. There is a fine balance between design and scaling manufacturing, and I can imagine that, at times, that process can be frustrating for creative visionaries like Jony Ive.

Fusing Design and Operations
Ive has had to operate in this world for a long time and was fully aware of the challenge; in fact, I’m certain he treated it as exactly that: a challenge. It’s hard to argue that when it comes to overall aesthetics, material design, colors, textures, etc., Apple sets the bar, but what gets underappreciated is how unprecedented their scale is with such complex designs. Jony Ive conquered the realm of creating incredible designs that can be mass-manufactured, though he would also fully understand the tradeoffs. Having conquered this challenge, I am not surprised he is interested in a new one.

The Apple that has emerged is one that is blending design and operations in a way no other company is. It’s easy to look from the outside and say they are purely operations, but that view does not give enough credit to a process in which even Apple’s top management remains deeply interested in the product side of the business.

Many of Apple’s critics are purely nostalgic. They want Apple to go back to the days when some of its designs were bolder, more iconic, possibly polarizing, but in that era Apple was selling tens of millions of products, not hundreds of millions. This is a crucially important point that many in the public sphere miss.

For Apple to continue on their path as one of the biggest companies in the world, and one of the biggest hardware-centric companies in the world, they will need to keep blazing trails down this fusion of operations and design. As I wrote in my article on Friday, some of Apple’s most interesting designs are still ahead of them as the bar to compete with technology we wear will be on a vastly different plane than that of things that sit on our desks or in our pockets and bags. That challenge is now mostly in the hands of the team Jony built and groomed for such a task.

Apple Post Jony Ive

I’m sure by now you have heard or seen the news that Jony Ive is officially leaving Apple. It was the biggest story in tech yesterday and would have been hard to miss. Analysis of this news has been all over the place, from the predictable “Apple is doomed, and its design culture is gone” to “Jony wasn’t perfect, and it’s time for others to shine.” Without analyzing every nuance, there are a few points I wanted to make regarding this news.

Leaving on His Terms. As someone who has watched Apple very closely for over two decades, it seems to me this transition was a long time coming. In 2015, Ive had already given more of the day-to-day design work to other members of his team, and I think the most important point here is that Jony Ive built a team he trusted to carry the design of Apple’s products. Ive likely retained a range of oversight, but he has not been driving Apple’s design ambitions for some time now.

Reporters I spoke to yesterday after the news broke asked if I felt much would change and my answer was no. Jony Ive would not have left Apple if there was not a team in place he trusted to carry the essence of Apple design forward. Obviously, as a part of this transition, Ive will still be around and, according to Apple’s statement, Apple will be a primary client of his new design firm.

At a minimum, things will stay the same. However, the upside potential may be even greater with Jony not at the helm.

A Fresh Start
I know this is hard for many to comprehend at the moment, but if Apple is to go on for another 100 years or more, the key execs who helped make Apple what it is today will not be around forever. I’ve always tried to explain how Apple has a culture, and an ethos, that has to be preserved. If Apple the company was Steve Jobs’ most important product, as many argue, then it is essential that product stays true to itself even in the years when senior leadership transitions the company to new leaders. Ultimately, this is true for Apple, and it was unfortunately tested prematurely with Steve Jobs’ passing. Jony’s departure and the transition will be the second test for Apple.

My colleague Carolina Milanesi tweeted a point I thought was compelling.

It is interesting to take the view that now is the time for the team Jony built to shine, free to do so without being in his shadow and able to chart their own path forward.

Jeff Williams is More of a Product Guy Than We Knew
One of the more interesting bits of information that came out in reports following the news of Ive leaving Apple was that Jeff Williams, Apple’s COO, is actually much more involved in product design than we initially knew. We knew the Apple Watch was his baby, but the extent to which he was “hands on” with the design process of that product was unclear.

This new information is actually quite positive. Many people’s worry with Apple was that it was becoming more of an operations company and that its design culture was being lost. If Jeff is truly more of a product guy, but also an exceptional operations guy, then this is quite positive in my view, as he is the best of both worlds.

The promotion of Sabih Khan to Senior Vice President of Operations is also being positioned as a way to let Jeff Williams focus more on design related to Apple products. This will be one of the more interesting things to watch, to see how much of a mark Jeff can make as he influences design, perhaps even more, going forward.

Apple’s Most Challenging Product Designs are Still Ahead
Many of Apple’s products under Jony Ive were icons in their own right, things that stood out and gave their owners a sense of pride to own and look at. Going forward, as we enter the realm of wearable computers, design will be even more at the center of the challenge if we want humans to embrace them. And this goes beyond just how things like AirPods, future ear-worn computers, or smart glasses look, to how they feel. The latter is something new to the equation, but one Apple has already succeeded with when it comes to Watch and AirPods.

Glasses may end up being the real design challenge, and while Jony will still be around with some oversight, new wearable products will be led by the new design organization and will be judged accordingly.

I’m optimistic that a new era of design at Apple is possible, one that will hopefully chart an even more fruitful path forward.

What Bill Gates’ Mea Culpa Says About Microsoft

This week, in an interview at venture firm Village Global, Bill Gates admitted that his biggest mistake was not to empower Windows to become what Android is today. More specifically, he said:

“In the software world, particularly for platforms, these are winner-take-all markets. So, the greatest mistake ever is whatever mismanagement I engaged in that caused Microsoft not to be what Android is. That is, Android is the standard non-Apple phone platform. That was a natural thing for Microsoft to win. It really is winner take all. If you’re there with half as many apps or 90 percent as many apps, you’re on your way to complete doom. There’s room for exactly one non-Apple operating system, and what’s that worth? $400 billion that would be transferred from company G to company M.”

This is the first time Gates has taken responsibility for not doing what was needed to be where Android is today. Over the years, the misstep was always associated with CEO Steve Ballmer and his dismissal of the impact that Apple’s iPhone would have on mobile computing. Hence, the most common commentary on this topic has always been that Microsoft missed mobile; they misjudged the importance that mobile phones would have in taking time away from PCs.

What emerges from this week’s comments is both a sharing of responsibility by Bill Gates and, most importantly in my view, an admission of missing the opportunity to monetize consumers, not of missing mobile.

Missing the Forest for the Trees

Back in 2008, Microsoft’s revenue was still highly dependent on software license sales, as a letter to shareholders clearly outlines.

“Fiscal 2008 was a successful year for Microsoft that saw the company deliver outstanding financial results, introduce significant innovations across the breadth of our product portfolio, and make key investments that position the company for strong future growth.
Thanks to the continued success of our core Windows and Office businesses, and double-digit growth in all of our business groups, revenue jumped to $60.4 billion in fiscal 2008, an increase of 18 percent compared with the previous fiscal year.
Throughout fiscal 2008 we saw strong adoption of Windows Vista, which has sold more than 180 million licenses, and the 2007 Microsoft Office system, which has sold more than 120 million licenses. Microsoft Office SharePoint Server 2007 passed the 100 million mark for licenses sold and recorded more than $1 billion in revenue.”

Microsoft’s performance was linked primarily to the enterprise market and only indirectly to the consumer market. What I mean by this is that PC buyers were buying hardware that happened to run Windows; they were not buying Windows. As a result, Microsoft saw consumers only as a dotted line to a license fee rather than as a clear target audience.

It Wasn’t Natural

With the rise of the iPhone and Android, Microsoft did not look at mobile in a conceptually different way from PCs. Mobile was just another “channel” for its license and software business. It certainly did not represent a new opportunity to rethink engagement with consumers so that “Windows was not just something they used, but something they loved” as Nadella said many years later. So, being where Android is today was not as natural as Bill Gates makes it sound because the battle strategy was fundamentally flawed.

Google had the foresight to appreciate the real impact that mobile would have on its core business and had the advantage of a core business that was already centered on consumers. Going from Android as a vehicle for search and advertising to Android as a platform for all services was a natural progression for Google. If you look back at the initial priorities Google had with Android, it was clear that the goal was different from what Apple was doing with apps. An app store was needed to compete with iOS, but it was not seen as a serious source of future revenue. Google services on phones provided that source through the engagement they drove. Engagement that, in turn, benefited the core business of search and advertising.

Restarting the Race

These reflections on past pivotal moments are very timely. In Cloud and AI, it is as if the safety car has pulled off the track and Google and Microsoft have been let loose to race again. Both companies are addressing the enterprise with Cloud and AI, and Google is clearly keeping its investment in the consumer market, albeit trying to distance the two so that it is clear the business models in these two areas are different.

What we have not seen with enough clarity is how Microsoft will use Cloud and AI to focus on consumers. Of course, there are Office 365, Surface, and Xbox, which are all relevant to consumers as well as the enterprise. But I believe there is a much broader role Microsoft could play as the boundaries between work and home become more blurred. For more and more users, the devices, software, and services they use at work are also those they turn to in their private life. This means there is a significant opportunity to use cloud and AI to make users’ overall experience better, and to use their data across the board to drive more value to them without indirectly monetizing them. I would actually argue that, done right, this value of added intelligence, data protection, and privacy could provide a source of direct revenue in itself. Apple certainly believes that, and as their core business is hardware and services, that is where they aim to monetize.

Pondering the what-ifs of winning mobile seems somewhat irrelevant at a time when there are so many more technology touchpoints in our lives. It also misses the point that the real target was winning consumers. Leveraging existing mobile platforms today to create synergies with the parts of the ecosystem Microsoft controls could be beneficial enough to the business in itself. But to harvest such an opportunity, Microsoft must do something that seems more natural to them now than it ever was in the past: taking a human-centric approach, whether that human is at the office or at home.

New Designs Will Redefine the World of Portable Computing

I began covering the PC industry in 1981 and was one of the first professional analysts to study and chronicle the PC market. Over 38 years, the PC industry has produced close to $3 trillion in revenue and created a lot of wealth and jobs for the people who create PCs, PC software, and the services that support them.

Today, the majority of personal computers sold are laptops and notebooks. While desktop computers are still made, they represent only about 20% of all PCs shipped today. The real PC workhorses fueling a much more mobile business lifestyle are the notebooks and laptops that drive today’s productivity, education, entertainment, and social media applications.

I have watched the evolution of the laptop very closely over these 38 years. In fact, I was at CEBIT in 1985 when Toshiba introduced the first-ever clamshell laptop, a design that the PC industry embraced and has popularized for over three decades.

What is ironic about the clamshell design is that until 2012, there was very little innovation in terms of design changes to that form factor.

The first break with traditional clamshells came in 2012 with the introduction of what Intel called “2 in 1s.” These were fundamentally a tablet with a detachable keyboard. Wired called them “lapelets” at the time, and some called them “hybrids.”

Being able to break the stronghold of the clamshell design was partly due to Microsoft’s newest OS, which added pen and touch support and other features that came out to support their first Surface hybrids that same year.

One could argue that Apple forced this design revolution with the introduction of the iPad in 2010, which also offered a detachable keyboard and a touch UI, but its focus was on being a tablet, not a laptop replacement as the 2 in 1s were from the beginning.

Since 2 in 1s emerged, there has been a lot of experimentation in portable computing. We have seen dozens of hybrids and 2 in 1s in many form factors and designs. Laptops have also become thinner and lighter. However, these types of mobile computers have not really caught on; they represent no more than 10-15% of all laptops and notebooks sold today.

If you take a historical look at the trends in portable computers, the period from 1985 to 2012 would be called the clamshell era, and 2012-2020 could be seen as the hybrid era. Now, as we are about to enter a new decade, we are about to see what one might call a “flexible era” of mobile computing, as advances in technology components accelerate. Over the next decade, mobile computer makers will have a host of new technologies to work with, from new battery chemistry that could power a laptop for a week, to new low-voltage semiconductors with enough power to deliver 3D holographic images to mobile screens. Portable computers will handle AR and VR user interfaces and applications and work with glasses that could transform the mobile computing experience altogether.

And over the next three years, we should see a perfecting of foldable screens that could be used in laptops as well as smartphones.

An early example of a foldable laptop was introduced by Lenovo a few weeks back. Tentatively named the ThinkPad X1 Foldable, it sports a 13” screen that folds in half.

Lenovo showed this to me recently, and I got to test it out. While it is still a prototype, it is well designed, and they have solved one of the biggest problems with any foldable device: they have developed patented hinges that move with the fold, making it possible for the screen to stay in place no matter how many times you fold it during its life. The quality of this device is excellent, since it was designed by Lenovo’s Yamato team that created the ThinkPad line of laptops.

There is no date for its release yet, and most other laptop vendors are working on similar models that could debut at CES in January.

While the folding screens themselves are still a work in progress and it may take a few years to perfect their manufacturing process, Lenovo has given us a glimpse of the future of portable computing, one which, along with the new advances in technology mentioned above, could make the next decade the flexible era of portable computing.

Consumer Influence on Enterprise Software

There is a convergence of consumer-driven user experiences happening within the world of enterprise software. This trend should not be shocking; what should be surprising is how long it took. There are examples all over the place, but with Slack’s direct listing last week and Zoom’s IPO earlier in the year, these two companies are the poster children for a shift that enterprise software companies must embrace.

The consumer era of computing dawned with the smartphone era. The consumer-centric software and services experiences smartphones drove are the culprit behind this influence on enterprise software. Human expectations changed as rapid innovation in mobile software set the bar higher. Thanks to shiny, engaging, and user-friendly smartphone apps, consumers would never look at software the same way again. That bar carried over to the tools they used at work, and their enlightened view of better software experiences led them to be critical of, and frustrated with, the complex tools most enterprises shove down their workers’ throats.

This movement is at the heart of what many call digital and workplace transformation. Core to this trend are consumer-friendly user experiences for all the things humans in the enterprise touch, chief among them the software and services they use as part of their primary job function. I’ve seen multiple investment-firm research studies suggesting that user-experience-focused enterprise software could lead to an incremental $8-15 billion a year in software sales. Whether enterprise IT managers like it or not, user experience sits at the center of all human-software interactions, and with that comes great benefit to the enterprise.

On this topic, I found a few relevant data points from a recent Salesforce study.

– 67% of customers say their standard for good experiences is higher than ever
– 51% of customers say most companies fall short of their expectations for great experiences
– 72% of customers say they share good experiences with others

This data validates the high bar that humans now have when it comes to software and services. With 72% saying they share good experiences with others, it makes sense that things like Slack or Zoom were able to grow within an enterprise even without IT approval. Now IT managers, as part of workplace transformation, are offering teams a menu of software and services options when it comes to productivity, collaboration, and CRM software.

The main thing you will hear from IT managers about this trend is that they are doing it mostly for retention. The reality is that many job markets are hot, and a lot of attractive talent, especially younger talent, will simply not tolerate old-world workflows and painfully designed enterprise software. It has to be easy to use and, more importantly, it has to fit the workflows of the digital and connected generation. But if this is done right, it leads not just to higher retention but also to a more engaged and more productive workforce.

Box CEO Aaron Levie has been quite active on Twitter in the days since Slack’s listing. This recent tweet caught my eye, and I thought it was worth sharing.

This is an interesting claim, one with a great deal of magnitude if true, and I think it is. His statement highlights how user experience design and understanding are influencing enterprise software. If the hunger for this was not immediately apparent, look back to the original launch of Apple’s iWork. That moment, for me, was when all of this became clear. Apple is, by nature, a company that designs software with user experience at the center of its ambition. And this culture, which is the intersection of liberal arts and technology, is how Apple makes great software experiences. At the launch of iWork, we witnessed Apple demonstrate what enterprise software, specifically productivity software, should look and feel like. Apple showed us how easy creating spreadsheets, presentations, and documents should be. I remember seeing that and thinking, this is exactly what Microsoft should have done with Office.

Sadly, iWork never took the workplace by storm, and now, finally, Microsoft has made great strides in making its software more consumer-friendly and easier to use. Microsoft is also at the center of taking this trend even further by integrating AI into Office, bringing new intelligent helpfulness that aids customers in getting more done in a shorter amount of time, without hassle or complexity.

With iWork, Apple gave a glimpse of a better way forward for enterprise software design, and everyone who saw it viewed it as refreshing compared to the tools in their day-to-day workflows at the time. The industry still has a long way to go, but this trend line is clearly the direction enterprise software is headed.

AT&T Shape Event Highlights 5G Promise and Perils

OK, let’s get this part out of the way first. In the right conditions, 5G is fast—really fast. Like 1.8 Gbps download speed fast. To put that into perspective, we’re talking 5-10x faster than even the fastest home WiFi, and more than 50x faster than a lot of the typical 25-35 Mbps download speeds most people experience with their day-to-day 4G LTE connections.

The catch is, however, that the “right conditions” are rarely going to be available. At AT&T’s recent Shape Expo event on the Warner Bros. studio lot in Burbank CA, I did actually see just over 1.8 Gbps on a speed test using Samsung’s brand new S10 5G phone when I stood 75 feet away from a tiny cell tower installed as part of a new 5G network on the lot and pointed the phone directly at it. Impressive, to be sure.

However, when I turned away and walked another 50 feet from the tower, holding the phone in my hand as you normally would (and not in direct sight of the special 5G antenna that was part of the network), the speed dropped to just under 150 Mbps because the connection switched over to LTE. Now, that’s still nothing to sneeze at, but it’s more than 10x slower than the fastest connection. This succinctly highlights some of the challenges that 5G early adopters will likely face.
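For those who want to check the arithmetic, the speed multiples quoted in this piece can be sanity-checked in a few lines of Python (the figures are the illustrative readings from my test, not lab-grade benchmarks):

```python
# Compare the 5G, LTE-fallback, and typical-LTE speeds quoted in the text.
# All figures are illustrative readings, not controlled benchmarks.

MBPS_PER_GBPS = 1000  # 1 Gbps = 1,000 Mbps

speeds_mbps = {
    "5G mmWave, line of sight": 1.8 * MBPS_PER_GBPS,  # the 1.8 Gbps reading
    "LTE fallback at the event": 150,
    "Typical day-to-day 4G LTE": 30,  # midpoint of the 25-35 Mbps range
}

baseline = speeds_mbps["Typical day-to-day 4G LTE"]
# Prints each speed and its multiple of the typical LTE baseline,
# e.g. "5G mmWave, line of sight: 1,800 Mbps (60x typical LTE)"
for name, mbps in speeds_mbps.items():
    print(f"{name}: {mbps:,.0f} Mbps ({mbps / baseline:.0f}x typical LTE)")
```

Against a 30 Mbps baseline, the line-of-sight mmWave reading works out to roughly 60x (which is where the “more than 50x” figure above comes from), while the LTE fallback is about 5x the baseline and 12x slower than the mmWave peak.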

To understand the dilemma, you need to know a bit more about how 5G networks work. First, the good news is that 5G builds on top of existing 4G LTE networks, and whenever 5G signals aren’t there, smartphones and other devices with cellular modem connections (such as wireless broadband access points—often nicknamed “pucks” because they look a bit like hockey pucks) fall back to 4G. Plus, as my experiment showed, it’s often a very good 4G connection, because any phone with 5G also typically has the most modern 4G modems. Similarly, locations that have 5G networks usually have the most current 4G technology installed as part of the network as well. Together, that combination typically means that you’ll get the best 4G network connection you can—to put it numerically, it can be as much as 5x faster than the typical LTE speeds many people experience today.

Within the 5G world, there are two basic types of connections that leverage two different ranges of radio frequencies to deliver signals from cellular networks to devices: millimeter wave and what’s termed “sub-6”—short for sub, or below, 6 GHz. Millimeter wave signals (so named because their wavelengths are on the order of millimeters) are extremely fast, but they don’t travel far and demand a direct line-of-sight connection. Like Verizon’s and T-Mobile’s, AT&T’s initial implementation of 5G networks uses millimeter wave technology, and the new Samsung S10 5G supports it as well.

So, back to my original test: I was only able to see the crazy-fast 1.8 Gbps download speeds when the phone was within the short range and direct line of sight of the 5G tower, which was transmitting millimeter waves at 39 GHz (which happens to be one of the frequency bands that AT&T controls). As soon as I moved a bit away and that connection was lost, both the phone and the network connection fell back to 4G LTE—albeit the latest LTE Advanced Pro version of 4G (which AT&T confusingly calls 5Ge, or 5G Evolution). In other words, to really deliver the full benefits of 5G speed and millimeter wave technology, carriers like AT&T are going to have to install a lot (!) of 5G millimeter wave-capable equipment. Thankfully, 5G-specific antennas can be added to existing 4G towers, and 5G small cells take up much less space than typical cellular network infrastructure components, but there will still have to be many more independent 5G cell sites to fully leverage 5G.

Later down the road for AT&T, Verizon, and T-Mobile (but in the forthcoming initial implementation of 5G from Sprint), you’ll be able to access the “other” kind of 5G frequencies, collectively referred to as “sub-6”. The sub-6 frequencies can all travel farther than millimeter wave and don’t require line of sight, so they work in a lot more places (including inside buildings). However, they’re also much slower than millimeter wave. As a result, the “sub-6” 5G options will enable much wider coverage but won’t really be significantly faster than many 4G LTE networks. (FYI, all existing 4G radio connections also occur below 6 GHz—in fact, below 3 GHz—but they use different methods for connections and different types of radio-frequency modulation than 5G.) Practically speaking, this means it will be easier to build out broad-coverage networks with “sub-6” 5G, but at the expense of speed. It’s a classic engineering tradeoff.
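As a footnote for the technically curious, the “millimeter wave” and “sub-6” labels fall straight out of the relationship between wavelength and frequency: wavelength equals the speed of light divided by frequency. A quick sketch, using AT&T’s 39 GHz band from above and 2.5 GHz as a representative sub-6 frequency (the latter is my illustrative pick, not a specific carrier’s band):

```python
# Wavelength shrinks as frequency rises; shorter waves can carry more data
# but travel shorter distances and need line of sight.

C = 3.0e8  # speed of light in m/s (approximate)

def wavelength_mm(freq_hz: float) -> float:
    """Return the wavelength in millimeters for a frequency in Hz."""
    return C / freq_hz * 1000  # meters -> millimeters

print(f"39 GHz mmWave: {wavelength_mm(39e9):.1f} mm")   # ~7.7 mm
print(f"2.5 GHz sub-6: {wavelength_mm(2.5e9):.0f} mm")  # ~120 mm, or 12 cm
```

That roughly fifteenfold difference in wavelength is the physics behind the coverage-versus-speed tradeoff: the short 39 GHz waves are easily blocked and absorbed, while the much longer sub-6 waves bend around obstacles and penetrate buildings.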

Of course, there’s more to 5G than just speed, and some of that potential for future 5G applications was on display at the AT&T Shape event. Most notably, reductions in latency, or lag time, can start to enable much better and more compelling implementations of cloud-based gaming over mobile network connections. Nvidia, for example, showed off a lag-free 5G-connected version of its GeForce Now cloud gaming service, which allows you to have a high-end desktop gaming experience powered by Nvidia graphics chips even on older PCs or laptops. In addition, several vendors started talking about delivering higher-quality video and graphics to AR and VR headsets courtesy of future 5G products.

There’s no question that 5G can and will make a large impact on many markets over time. But as these real-world experiences demonstrate, it’s a complicated story that’s going to take several years to really show off its full potential.