What You Might Have Missed: Week of August 4th, 2017
This article is exclusively for subscribers to the Think.Tank.
Consumer uptake of virtual reality might be taking longer than some pundits expected, but the technology is finding robust traction on the commercial side of things. In fact, IDC recently published a Worldwide AR/VR Spending Guide report that predicts commercial spending on hardware, software, and services related to virtual reality will surpass consumer spending on the technology this year. What makes this particularly interesting is that this commercial growth is taking off despite the dearth of commercial-focused hardware in the market.
Strong Uptake Across Numerous Verticals
Many of the challenges facing VR in the consumer market, such as the high cost of hardware, the complexity of setup, and the lack of mainstream content, aren’t major issues when it comes to commercial deployments of the technology. And across many different verticals and use cases, the benefits are obvious, and the potential return on investment is clear. IDC’s research on the topic to date has explored VR in 12 different industries and across 26 different use cases. And remember: it is still early days.
Some of the most compelling industry use cases include:
Education/Knowledge Transfer: From training firefighters and soldiers to educating engineers and school kids, VR is going to drive dramatic shifts in how people learn in the future. In the first scenario, people receive training for situations too dangerous or expensive to simulate in the real world. In the second, students gain access to brand new ways of interacting with and absorbing information that are less passive and more active.
Manufacturing: VR is already taking off in both process and discrete manufacturing. The use cases range from the collaborative, iterative process of creating products to training engineers and others to run massive, complex manufacturing lines. The potential for VR to disrupt age-old manufacturing processes—especially when combined with 3D printing—is massive.
Growth Despite Key Challenges
IDC has forecast robust growth in all the above areas, as well as a long list of others. And this growth is occurring even though a great deal of the early work here is happening on the consumer-grade hardware that’s available in the market today. Suffice it to say, products designed for use by consumers aren’t rugged enough for long-term deployments in commercial settings. This lack of commercial-focused VR hardware is a clear market need the industry has failed to address so far.
Later this year I expect the launch of standalone VR products—based on reference designs from Intel and Qualcomm—to gain more traction in commercial than in consumer. While most consumers have limited need of a VR-only device, companies looking to deploy VR will find the simplicity quite appealing, especially after vendors start building robust, commercial-grade versions.
As the number of hardware options increases and more commercial-centric designs hit the market, the software and services associated with the technology will improve, too. We should also start to see the emergence of more VR standards, which will be key for long-term growth. And in the span of a few short years VR will be quite well entrenched in many of these vertical markets. This represents a large opportunity for the technology companies that service these markets, and an outsized threat to those industry verticals that fail to embrace the technology in a timely manner.
As we’re nearing the end of earnings season, one of the things that’s become increasingly clear is that the big companies have mostly performed consistently with their past performance, delivering strong growth and profits. Meanwhile, smaller companies have struggled to find growth and profitability, often losing share to their bigger competitors. The recurring theme for me has been that the dominant become ever more dominant, while the smaller players continue to struggle to break in and cross over to the other side of what’s increasingly looking like a chasm.
On the big side, we have the giants of the consumer tech industry, as measured by revenue or by influence, all of which have now reported their results:
With one exception, these are highly profitable companies, and with one exception again, they grew by double digits year on year. Beyond the mere financials, though, these companies individually or in pairs or trios dominate key markets:
I could go on, but you get the picture: these big, successful companies are only becoming bigger and more successful, and more dominant in the various markets where they compete.
Not all of the smaller consumer tech companies have reported yet, but we have enough of a picture from those that have, and from past earnings from those that haven’t, to know what we’ll end up with:
Now, some of this is down to company lifecycles, with Twitter and Snap yet to generate a profit in any quarter, while Fitbit and GoPro have been profitable, high-growth companies in the past but have run into trouble. The lower-tier smartphone vendors, meanwhile, have always struggled in markets that offer little differentiation and intense competition and that heavily reward scale and premium offerings.
In an earlier piece, I wrote about the danger of being a one-trick pony in the tech industry, with both Fitbit and GoPro among the examples I cited. And that remains a key issue for these companies, many of which are single-product companies and have failed to build broader platforms and ecosystems that can attract consumers and differentiate against powerful competitors.
But the barriers to success go well beyond that. Many of the largest players in the industry enjoy significant network effects and scale which enable them to quickly ramp up new products and services by selling them to massive installed bases of devices or regular users. I wrote about the power of Amazon’s Prime in this regard last week, but Facebook is another great example. If Facebook feels threatened by a new app or feature offered by a rival, all it has to do is copy it and make it available to its own massive user base of 2 billion monthly active users or the smaller but still substantial WhatsApp, Messenger, and Instagram user bases. The rise of Instagram Stories to 250 million daily active users over the past year, eclipsing Snapchat’s 166 million daily active users as of the end of Q1 2017, is perhaps the perfect example of this.
On the rare occasions when companies and products do manage to break through these barriers and create real differentiated value, they’re often simply acquired by the bigger players. WhatsApp, Instagram, LinkedIn, DeepMind, and others are among the list of companies which had created interesting businesses or technologies outside of the big tech companies and yet have now ended up being absorbed by them.
For all these reasons, it’s almost impossible to cite an example of a consumer technology company that’s emerged in the last few years and achieved real financial success independently of and despite the dominance of the big tech companies. Of those that have tried, the vast majority have run up against the power of ecosystems, been cloned and eclipsed by the big companies, created markets which ended up dominated by larger players, or been acquired.
I’m far from convinced, as some are, that this means regulators should start looking at these companies on antitrust grounds, mostly because I don’t think they’re doing anything illegal. But we are going to see calls for regulatory intervention, especially in Europe and other markets outside the US, and we’re going to see an increasing backlash against these dominant players from consumer groups, would-be competitors, and politicians. Dealing effectively with these complaints and threats is going to be an important skill for these companies over the coming years, even as they begin to feel more and more invincible.
By my estimates, Fitbit and Apple Watch combined for nearly 50% of all wearable shipments in the June quarter of 2017. The market is a duopoly and major questions about Fitbit remain. Fitbit’s ASP was just over $100 for the quarter while the Apple Watch was just over $300 in our model. However, Fitbit and Apple Watch are on two very different trajectories.
Two of the big buzzwords in tech these days are Artificial Intelligence and Machine Learning. Those who understand these technologies know that together they will have a dramatic impact on pretty much everything they are applied to in the future.
But for consumers, AI and ML are still a real mystery. Because various movies feature AI-like characters who are mostly villains, broad consumer top-of-mind awareness paints AI in a very negative light. And with Elon Musk and Bill Gates warning us that AI could be very dangerous in the future, you can see why consumers are confused by, and even scared of, this type of technology today.
The latest Microsoft earnings results were a stark reminder that the consumer market makes only a marginal contribution to the company’s overall revenue. Many believe consumers are not a priority for Microsoft and therefore struggle to understand the role of the Microsoft stores; some argue Microsoft should admit the stores were a failed experiment and that it’s time to close them.
I believe it would be a mistake.
I also believe Microsoft does care about consumers; it just struggles to show it, especially when it comes to apps and services.
Microsoft is the exact opposite of Apple in the balance between enterprise and consumer. Apple goes out of its way not to come across as an enterprise company while Microsoft goes out of its way to always put enterprises first. In reality, both companies care about both markets and, more importantly, both companies need both markets!
When it comes to their retail presence, the two companies share similar goals. While it is not something Microsoft would admit to, creating an Apple store experience was the goal when they first opened their stores. Any tech company looking to have a retail presence should have Apple as a benchmark.
Aside from the short period when John Browett ran Apple’s retail business, Apple’s stores have always been about using great customer care to enhance brand loyalty. The stores are without a doubt one of Apple’s strongest marketing assets as well as a solid revenue generator. People go in to experience new devices, seek help with the ones they own, and learn how to get the most out of them. In exchanges I have often witnessed in stores, both in the US and in the UK where I lived, knowledgeable and invested employees made each customer feel the company cared.
Microsoft has failed thus far to create an in-store experience that helps its brand. Calling it quits now, however, would be the wrong thing to do. Microsoft has never had this much to offer consumers in terms of an end-to-end experience. This need to experience – not try before you buy, but truly experience – will grow with ambient computing, making a store presence even more valuable.
A Showcase for the Surface Portfolio and Microsoft Apps
Microsoft now has a full portfolio of Surface products that can be experienced in store. On display are not just the products but the vision Microsoft has of modern computing. From Surface Pro to Surface Book to Surface Laptop, and on to the more aspirational Surface Studio and Surface Hub, the lineup helps tell that story. I was in a store with my daughter recently for a coding camp, and seeing how the kids were drawn to the Hub made me wonder why there were not more people in the store doing just that. I am sure stores vary by location in how busy they are, but more of a push around devices and experiences could certainly create more buzz.
Back in 2015, Microsoft CEO Satya Nadella said: “we want people to love Windows 10 not just use it.” The same should be said about all Microsoft products including the stores.
Activities in stores have been growing. I have seen more emphasis on STEM as part of the recent education push, including Minecraft coding. Yet more could be done around new apps like Story Remix, People, or Paint 3D. Stores should offer classes on how to use these apps and have staff using them as customers come in, inviting them to try. These kinds of activities would help create a different atmosphere in the store and educate potential customers. They would also help consumers think more broadly about Microsoft.
Discoverability of new Windows 10 features remains an issue, especially for those consumers who upgraded to it on their old computers. Seeing what is possible might generate an upgrade opportunity and one that will benefit Surface. Surface Pro sales have been growing steadily in the enterprise market but not as much as they could in the consumer one. While many point to cost as an inhibitor, the real issue is the lack of visibility. Many other PC manufacturers have devices at similar price points, and of course Apple does too, so, clearly, there is a consumer market for Surface as well if mass market consumers knew more about it.
A Look to the Future to Build Love for the Brand Today
Microsoft is no longer limited to Windows on PCs, and while cloud and Office 365 might be the biggest revenue generators, there are other products that will define the future of computing.
HoloLens stands out.
Enterprises are very interested in HoloLens, as there are many applications that can cut costs, increase productivity, and enrich experiences. Yet HoloLens has many consumer applications too, which could help reinvigorate the in-store experience. Think about holographic Minecraft or a walk on Mars. I realize this is still a device with limited availability, and Microsoft might have concerns about dumbing down the experience and making it feel like a VR park. Yet there are opportunities to offer targeted events, limited in number, that consumers could sign up for.
Microsoft’s effort to democratize 3D could be another area of focus, with classes on designing an object in Paint 3D and then printing it. Again, I realize the delicate balance between creating buzz and creating a circus, but right now the stores have very little buzz.
The big point about Apple stores is that they are first and foremost great experience centers. Microsoft stores feel more like a cross between an IT support center and a Best Buy, where I go to buy as a last resort, getting in and out as quickly as I can. My experience is that Microsoft store staff are there to sell, not to guide me and facilitate my discovery of what Microsoft has to offer.
“Creativity is the new productivity” is a great slogan for Windows, and Microsoft should really look at becoming more creative when it comes to the stores.
Microsoft must deliver a consistent experience across stores, one focused on shifting from serving customers in a transactional exchange to facilitating customers’ experiences. This might require a change in how stores are evaluated and rewarded. Revenue should not be the short-term focus; brand awareness and advocacy should be, which in turn will bring increased revenue over time.
NVIDIA continues to bolster its position in the market with an emphasis on machine learning and artificial intelligence, in addition to its leadership in graphics for the mobile, consumer, and professional segments. At SIGGRAPH this week in Los Angeles, NVIDIA announced several new projects that bring an AI angle to graphics-specific tasks and workloads, showing the value of AI across a wide spectrum of workflows as well as the company’s leadership position in it.
The most exciting AI announcement came in the form of an update to the OptiX SDK that implements an AI-accelerated denoising capability in the ray tracing engine. Ray tracing can create highly realistic imagery but comes at a high computational cost that forces renders of complex scenes to take minutes or even hours to complete. In a partially computed state, these images look like noisy photographs, with speckled artifacts similar to what you see in photos taken in extremely low light.
NVIDIA and university researchers use deep learning and GPU computing to predict the final output image from those partly finished results in a fraction of the time. The AI model is trained on many “known good images,” which requires time up front but then lets creators and artists move around the scene, changing view angles and framing shots, in roughly one tenth the time. The result is near real-time interaction with a high-quality ray-traced image, accelerating the artist’s work and vision.
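To see why partial renders look speckled in the first place, it helps to remember that path tracing is a Monte Carlo process: each pixel is an average of random light samples, and its error shrinks only with the square root of the sample count. The toy sketch below (illustrative arithmetic only, not NVIDIA’s actual OptiX denoiser) shows that roughly 64x the samples buy only about 8x less noise:

```python
import random

def render_pixel(n_samples, rng, true_value=0.5, noise=0.2):
    # Toy Monte Carlo pixel: average n noisy light samples around the true value.
    return sum(true_value + rng.gauss(0.0, noise) for _ in range(n_samples)) / n_samples

def mean_abs_error(n_samples, trials=500, seed=42):
    # Average distance from the converged value, measured across many pixels.
    rng = random.Random(seed)
    return sum(abs(render_pixel(n_samples, rng) - 0.5) for _ in range(trials)) / trials

low_quality  = mean_abs_error(16)    # fast, speckled partial render
high_quality = mean_abs_error(1024)  # 64x the work for roughly 8x less noise
```

A denoiser trained on known good images predicts the converged result directly from the cheap, noisy estimate, which is exactly the 64x of brute-force work it sidesteps.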
Facial animation is one of the most difficult areas of graphics production. NVIDIA has found a way to use deep learning neural networks to improve the efficiency and quality of facial animation while saving creators hours of time. Instead of manually touching up live-action footage in a labor-intensive process, researchers were able to train the network for facial animation using only the actors’ footage, in a matter of five minutes.
NVIDIA also implemented the ability to generate realistic facial animation from the resulting data with only audio. This tool will allow game creators to implement more characters and NPCs with realistic avatars in multiple languages. Remedy Entertainment, makers of the game Quantum Break, helped NVIDIA with the implementation and claim it can cut down on as much as 80% of the work required for large scale projects.
Anti-aliasing is a very common graphics technique for reducing the jagged edges of polygon models. NVIDIA researchers have also found a way to use a deep neural network to recognize the artifacts and replace them with smooth, color-correct pixels.
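For context on what the network learns to approximate: the traditional brute-force fix is supersampling, which estimates how much of each pixel an edge actually covers by averaging several samples per pixel. A minimal sketch (a hypothetical pure-Python illustration, not NVIDIA’s method):

```python
def coverage(edge_x, px_left, px_right, subsamples=1):
    """Fraction of the pixel [px_left, px_right] covered by geometry
    extending up to edge_x, estimated from evenly spaced samples."""
    width = px_right - px_left
    hits = sum(1 for i in range(subsamples)
               if px_left + (i + 0.5) * width / subsamples < edge_x)
    return hits / subsamples

# One sample per pixel snaps coverage to 0 or 1 (the jagged "staircase");
# more subsamples recover the smooth partial coverage of the edge.
```

The deep-learning approach aims to produce that smooth result without paying for the extra samples on every frame.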
Finally, NVIDIA adapted AI to ray tracing as well, using a reinforcement learning technique to steer ray paths toward those that are considered “useful.” Traces that are more likely to connect lights to the virtual camera (the viewport) are given priority, since they will contribute to the final image. Wasted traces that head toward portions of the geometry blocked from or unseen by the camera can be culled before the computation is done, lessening the workload and improving performance.
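The intuition behind prioritizing “useful” traces is importance sampling: spend samples where they are likely to contribute, then reweight so the result stays unbiased. In this toy sketch the learned policy is replaced by a hand-tuned proposal distribution (purely illustrative, not NVIDIA’s implementation):

```python
import random

def brightness(x):
    # Toy scene: only rays landing in a narrow region connect to the light.
    return 1.0 if 0.45 <= x <= 0.55 else 0.0   # true integral = 0.1

def uniform_estimate(n, rng):
    # Naive tracing: rays scattered uniformly, most contribute nothing.
    return sum(brightness(rng.random()) for _ in range(n)) / n

def guided_estimate(n, rng):
    # "Guided" tracing: 80% of rays aimed at the bright region, each
    # weighted by the mixture density so the estimate stays unbiased.
    total = 0.0
    for _ in range(n):
        if rng.random() < 0.8:
            x = 0.45 + 0.1 * rng.random()       # steered ray
        else:
            x = rng.random()                    # exploratory ray
        pdf = (0.8 / 0.1 + 0.2) if 0.45 <= x <= 0.55 else 0.2
        total += brightness(x) / pdf
    return total / n
```

With the same ray budget, the guided estimator’s variance is far lower; that variance reduction is the payoff the reinforcement learner is chasing.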
These four examples of AI being used to accelerate graphics workloads show us that the same GPUs used to render games to your screen can be harnessed to accelerate the work of game and film creators. Requiring fewer man-hours and resources for any part of the creation pipeline means developers can spend more time building richer environments and experiences for the audience. These examples are indicative of the impact AI and deep learning will have on any number of markets and workflows, touching on much more than typical machine learning scenarios. NVIDIA paved the way to GPU computing with CUDA, and it continues to show why its investment in artificial intelligence will pay off.
Apple’s Fiscal Q3 earnings were a bit more interesting than I thought. Much of this interest came from extensive commentary from Tim Cook on a range of things from iPad, to Apple Watch, to autonomous systems, and the iPhone in emerging markets like India. First, let’s start with the iPad.
I mentioned earlier in the year that I was helping researchers at the Clay Christensen Institute as they work to make revisions to disruption theory for an upcoming book Clay is writing. In the conversations since, it has become clear that disruption theory will face more and more challenges in the consumer era. There are key areas where the theory and overall framework are applicable, and there are others where they are not.
Work smarter, not harder. That’s the phrase that people like to use when talking about how being more efficient in one’s efforts can often have a greater reward.
It’s also starting to become particularly appropriate for some of the latest advances in semiconductor chip design and artificial intelligence-based software efforts. For many years, much of the effort in silicon computing advancements was focused on cramming more transistors running at faster speeds into the same basic architectures. So, CPUs, for example, became bigger and faster, but they were still fundamentally CPUs. Many of the software advancements, in turn, were accomplished by running some of the same basic algorithms and program elements faster.
Several recent announcements from AMD and NVIDIA, as well as ongoing work by Qualcomm, Intel, and others, however, highlight how those rules have radically changed. From new types of chip designs, to different combinations of chip elements, to clever new software tools and methodologies that better take advantage of these architectures, we’re on the cusp of a whole new range of radically smarter silicon that will start enabling the science fiction-like applications we’ve begun to see small glimpses of.
From photorealistic augmented and virtual reality experiences, to truly intelligent assistants and robots, these new hardware chip designs and software efforts are closer to making the impossible seem a lot more possible.
Part of the reason for this is basic physics. While we can argue about the validity of being able to continue the Moore’s Law inspired performance improvements that have given the semiconductor industry a staggering degree of advancements over the last 50 years, there is no denying that things like the clock speeds for CPUs, GPUs and other key types of chips stalled out several years ago. As a result, semiconductor professionals have started to tackle the problem of moving performance forward in very different ways.
In addition, we’ve started to see a much wider array of tasks, or workloads, that today’s semiconductors are being asked to perform. Image recognition, ray tracing, 4K and 8K video editing, highly demanding games, and artificial intelligence-based work are all making it clear that these new kinds of chip design efforts are going to be essential to meet the smarter computing needs of the future.
Specifically, we’ve seen a tremendous rise in interest, awareness, and development of new chip architectures. GPUs have led the charge here, but we’re seeing things like FPGAs (field programmable gate arrays)—such as those from the Altera division of Intel—and dedicated AI chips from the likes of Intel’s new Nervana division, as well as chip newcomers Google and Microsoft, establish a strong presence.
We’re also seeing interesting new designs within more traditional chip architectures. AMD’s new high-end Threadripper desktop CPU leverages the company’s Epyc server design and combines multiple independent CPU dies connected together over a high-speed Infinity Fabric connection to drive new levels of performance. This is a radically different take than the traditional concept of just making individual CPU dies bigger and faster. In the future, we could also see different types of semiconductor components (even from companies other than AMD) integrated into a single package all connected over this Infinity Fabric.
This notion of multiple computing parts working together as a heterogeneous whole is seeing many types of iterations. Qualcomm’s work on its Snapdragon SoCs over the last several years, for example, has been to combine CPUs, GPUs, DSPs (digital signal processors), and other specialized hardware “chunks” into a coherent whole. Just last week, the company added a new AI software development kit (SDK) that intelligently assigns different types of AI workloads to different components of a Snapdragon—all in an effort to give the best possible performance.
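Conceptually, such an SDK maps each workload to the compute block best suited for it. The sketch below is hypothetical (the workload names and rules are invented for illustration; Qualcomm’s actual heuristics are not public):

```python
# Invented workload-to-block mapping, for illustration only.
PREFERRED_BLOCK = {
    "image_classification": "gpu",  # large parallel matrix math suits the GPU
    "keyword_spotting":     "dsp",  # always-on, low-power sensing suits the DSP
    "control_logic":        "cpu",  # branchy, serial work stays on the CPU
}

def assign_block(workload, available=("cpu", "gpu", "dsp")):
    """Pick the preferred compute block for a workload, falling back
    to the CPU when the preferred block is unavailable."""
    block = PREFERRED_BLOCK.get(workload, "cpu")
    return block if block in available else "cpu"
```

The design point is the fallback: heterogeneous scheduling only helps if every workload still has a safe home when a specialized block is busy or absent.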
Yet another variation comes from attaching high-end, power-demanding external GPUs (or other components) to notebooks via the Thunderbolt 3 standard. Apple showed this with an AMD-based external graphics card at its last event, and this week at the SIGGRAPH computer graphics conference, NVIDIA introduced two entries of its own to the eGPU market.
The developments also go beyond hardware. While many people are (justifiably) getting tired of hearing about how seemingly everything is being enhanced with AI, NVIDIA showed a compelling demo at its SIGGRAPH press conference in which the highly compute-intensive task of ray-tracing a complex image was sped up tremendously by leveraging an AI-driven improvement in rendering. Essentially, NVIDIA used GPUs to “train” a neural network how to ray-trace certain types of images, then converted that “knowledge” into algorithms that different GPUs can use to redraw and move around very complex images very quickly. It was a classic demonstration of how the brute-force advancements we’ve traditionally seen in GPUs (or CPUs) can be surpassed with smarter ways of using those tools.
After seeming to stall for a while, the performance requirements for newer applications are becoming clear—and the amount of work that’s still needed to get there is becoming clearer still. The only way we can start to achieve these new performance levels is with the types of heterogeneous chip architecture designs and radically different software approaches that are starting to appear.
Though some of these advances have been discussed in theory for a while, it’s only now that they’ve begun to appear. Not only are we seeing important steps forward, but we are also beginning to see the fog lift as to the future of these technologies and where the tech industry is headed. The image ahead is starting to look pretty good.
This past week, I was camping in Tahoe with around 50 friends and family. This group comprised a lot of diversity in age, jobs, and income class. In our group were doctors and nurses, construction workers, police officers and firefighters, software engineers, teachers, students, and those fortunate enough to be retired. While I try to avoid any talk of work or tech when on vacation, I’m generally the person this group comes to with their questions on things present or in the future. For whatever reason, Siri was a hot topic this year.
Through a range of conversations around Siri, I heard things like “Siri is stupid,” or “Siri doesn’t know anything.” Some variation of this theme came up numerous times, as people shared their frustration with Siri and wondered what she is supposed to be good at.
As I dug deeper into what was going on, I realized a key issue is certain expectations that are placed upon Siri. To use business terminology I’m fond of, the question at hand is why consumers are hiring Siri. It became clear that most of the folks in our group used, or tried to use, Siri to search for things on the web and other general information tasks. This was the primary driver of their sentiment toward Siri. When I asked if she did things well like get directions, set a timer, set a reminder, etc., they all said Siri worked perfectly for those tasks.
These conversations and anecdotal data line up with our primary research on this subject. We recently conducted a study on smartphone virtual assistants like Siri and Google Assistant and found that searching for something, or general information queries, were among the top five most common behaviors of both Siri users and Google Assistant (Hey Google, Ok Google) users. Interestingly, searching the Internet was the number one thing people do with Google Assistant, while it is only the fifth most common behavior of Siri users. Siri users mostly use Siri to automate tasks that would otherwise take them time on their smartphones, like setting a timer, setting a reminder, calling someone, or texting someone.
The difference in key behaviors between the two assistants is not a surprise, since Google is, and will always be, better than Siri at searching the web. Google’s mission is to organize the world’s data, and it will do that better than anyone; therefore its AI agent will always be the best at search. The challenge with consumer sentiment around Siri today stems from its weakness in general Internet search. If consumers regularly want to use a voice assistant for search and general information queries, Siri is not going to beat Google. Apple’s challenge is to help consumers understand the jobs at which Siri is best.
In part, this seems to be the goal of Apple’s recent short video ads with Dwayne Johnson, a.k.a. The Rock. Both our data and my qualitative conversations confirm Siri delivers on the job of getting things done and automating certain tasks inherent to iOS. Knowing that Google’s mission is to organize the world’s data, and that this mission will manifest itself in its AI agent, I’d interpret Apple’s mission with Siri as organizing your life.
Apple is going to have to pick the battles Siri will fight, and I’d argue that organizing your life, and being the assistant of choice for life itself, is the battle Apple wants to take on with Siri. Siri has the potential to understand me deeply and perhaps intimately. Siri will know about my family in ways no other digital assistant can. Siri will also understand more completely things like my preferences, core behaviors, likes and dislikes, places I’ve gone, the food I like (since I take so many pictures of it), and much, much more.
While Google will battle here as well in some cases, the reality is it has made its mission clear and is not (yet) trying to turn its assistant into a true personal assistant. We know this because Google Assistant has no name, which is required to build trust and intimacy. This is why Apple has Siri, Amazon has Alexa, and Microsoft has Cortana. Without a name, Google Assistant users will use it only to a point and will likely not develop the deeper, more trusted relationship that users of other assistants will. Given Google’s business model, this makes sense. Google would rather not lock itself out of the iPhone or Microsoft’s ecosystem, and wants iPhone users, Windows users, and even Amazon customers to keep using Google services. Playing a cross-platform game means Google is focusing Google Assistant on information, and this is wise.
So this brings up a key hurdle I’m still not sure how we get over. All of these assistants will need to talk to each other because they all solve different problems and are hired for different reasons. Google is my search of choice, Amazon is my shopping platform of choice and Siri is my life assistant. I’d prefer to have Siri talk to Alexa (or the Amazon ecosystem at large), and Google’s search products and get me what I need by using the best mechanism to do so. Perhaps this is asking too much, but this scenario will yield the best customer experience.
The other scenario is that I have to use each assistant independently for different things. This is basically how I use them today. When I want to search the Internet or do a general information query, I use Google Home. When I want to shop or get product information, I use Amazon and the Echo. When I need to organize my life, get information on events, check my calendar or email, or contact friends or family, I use Siri.
At least for now, and perhaps for the foreseeable future, none of these assistants is poised to be hired by a consumer to do everything. They each have glaring weaknesses, but together they are pretty powerful. I have a hunch consumers will have more delightful experiences with these assistants if they are hired for the right reasons. The challenge these assistants currently face is that consumers do not yet know what those reasons are.
These assistants will ultimately become platforms. Right now there is a land grab among Siri, Alexa, Google Assistant, and Cortana for the relationship with the consumer. My thesis remains that these assistants will succeed best if they stay specialized, though in an ideal world they would all work together.
As I was digesting Samsung’s Q2 2017 results last week, I realized that I’ve never really taken the time to come up with a robust mental picture of Samsung Electronics as a company, where its revenues come from, and how it all hangs together. I have lots of charts that I produce every quarter for my clients, but in many cases the labels on the charts don’t have a lot of meaning behind them for me. As such, over the last few days, I’ve spent a little time diving deeper and really understanding the moving parts beyond the mobile division that dominates most coverage of the company (and my view of it). Today, I’m going to share what I’ve learned.
This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing AMD’s quarterly earnings, Microsoft’s announcement of a custom AI-enabled chip for the next HoloLens, Samsung’s earnings, and rumors of Apple building three manufacturing plants in the US.
If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast
There’s a long list of industries that have been disrupted by tech. The internet, broadband, and the PC/smartphone have all had a significant impact on how we communicate, shop, create and consume content, book travel, play games, and so on. But I’ve also been thinking about what industries or consumer experiences have NOT been as significantly affected by tech, at least so far. For example, even though Uber and Lyft have disrupted the taxi business, and cars are practically computers on wheels, it still takes as long or longer to get from A to B by car or plane as it did 20 years ago.
So, what are some other examples of industries that have been least disrupted by tech?
The Health Care System. Technology has contributed to enormous strides in the diagnosis and treatment of disease and illness. But the actual medical care system is not significantly more efficient than it was 10 or 20 years ago. Yes, there are now electronic medical records, and some consumers have access to their health history online. But the process of booking appointments, getting a referral, determining the price of a service, or determining the efficacy of a physician or the quality of a hospital or other treatment facility remains a rather arcane process. As with the slow spread of mobile payments, this is not a technology problem – it’s an industry problem.
Buying and Building A House. I just went through the process of buying a house for the first time in ten years, and it was interesting how the process of applying for and obtaining a mortgage is still very much an analog affair. Actually, there were more forms than ever to fill out…and the house closing – deed, title, and so on – is a fairly time-consuming and paper-driven process. About the only improvement is that one can e-sign some of the documents, or scan and email them to the mortgage broker, which saved some stamps.
Christopher Mims of the Wall Street Journal wrote an interesting column recently on how over the past 60 years, productivity in manufacturing has increased eightfold, yet we haven’t seen the same improvements in the $1 trillion construction industry. There are some compelling startups, such as Katerra, that are starting to address this opportunity.
Transportation. Elaborating a bit on the earlier point, segments of the transportation industry, such as taxis, have been disrupted. And cars have an amazing amount of tech in them and are much better built, thereby requiring fewer repairs and lasting longer. But the process of getting around? Tech hasn’t yet solved traffic. We are potentially at the dawn of a new era, with driverless cars (and trucks), smart cities, the Hyperloop, and so on. And big data has the potential to make public transportation systems more cost effective and efficient. This is a space likely to experience more change in the next 10 years than it has in the past 25.
Government Services. Yes, you can pay a traffic ticket, do your taxes, and renew your driver’s license online. But tech has not yet had a significant impact on the effectiveness of government services, or the consumer experience with the public sector. Sort of like the medical care system, many government processes remain arcane and laborious. Consumers don’t have a good view into the quality or effectiveness of public services or their elected officials. A simple example could be providing consumers a dashboard into how quickly streets are cleared after a snowfall. One promising area has been apps for reporting minor issues like potholes or broken street lamps, and the 311 system, which is starting to collect large amounts of data.
The Judicial System. What, there aren’t handcuffs that can be unlocked by an iPhone? From Perry Mason to L.A. Law to the Good Wife…most aspects of the legal process haven’t changed all that much over the years: how lawyers work, how cases are tried, what happens in a courtroom. It’s almost quaint. I served on a jury for several weeks in 2016, in a courtroom more than 100 years old, and it struck me that the experience of a juror, and the whole process of preparing and presenting a case, is essentially unchanged. Sure, briefs are typed on a PC, and evidence is catalogued electronically, but it does seem that the legal industry is a major contributor to keeping pen, paper, and three-piece suits alive…
I’m sure there are some other good examples, but these are the industries that sprang to mind. So, tech hasn’t disrupted everything…
Amazon CEO Jeff Bezos famously talks about the company’s Prime subscription service as an important part of its “flywheel” strategy, through which customers become increasingly tied into Amazon’s ecosystem and end up becoming more loyal and higher spending customers. The chief benefit of the Prime subscription has always been sold as free two-day shipping, but of course, the list of features the service offers has long since grown beyond that to include video and music streaming, access to books and magazines, photo storage and more. Now, it’s even being used as a foundation for selling additional third party subscriptions like TV bundles. It’s increasingly clear that, though the primary purpose of Prime may be selling more goods on Amazon.com, it’s becoming a very powerful platform for selling other things too.
Though I think the Prime perk that’s most often talked about beyond shipping is video, it’s fascinating to see what Amazon has been able to achieve in music, in large part by offering a limited selection of music for streaming as part of the Prime subscription. Though other streaming music services offer 30-40 million songs, Amazon offers a subset of two million through its Prime Music service, and that’s been a popular option. Media recently reported that Amazon now has the number three position in streaming music behind Spotify and Apple Music globally, through a combination of the limited Prime Music service and its separate Music Unlimited service. My own recent surveys suggest roughly one in six Prime subscribers in the US use the music feature at least monthly, and I would bet that Echo adoption plays a role in that, given that Prime Music is integrated into the Alexa function. That’s roughly half the rate of adoption of its video service after a much shorter time in the market.
Speaking of Prime Video, Amazon has invested heavily in the service in recent years, upping its original content spending and competing with Netflix in the catalog-based streaming space. It’s even expanded in Netflix-like fashion to many other countries around the world, though in practice its catalog remains very limited outside of a few key markets. But the more interesting part of its recent video strategy has been its creation of the Amazon Channels service, which allows Prime subscribers to bolt on monthly subscriptions to various channels, from premium networks like HBO and Showtime to niche and foreign content. Recent figures reported by BTIG Research suggest that Amazon alone may be responsible for a significant chunk of the subscribers for standalone streaming services like HBO Now through this channel. The combination of its own video service and these third party services into a bundle creates a pretty unique offering in the market, something really only matched indirectly by the subscription model offered by Apple’s App Store, albeit without a first party subscription as part of the bundle.
Though video and music are the most popular features beyond free shipping, others such as the free access to books and magazines through Prime Reading and the Photo Storage offerings are also used by 10% or more of Prime subscribers in the US. Applied to the likely 80 million plus subscribers Amazon now has globally, that means Amazon is becoming a meaningful player in a number of secondary markets almost incidentally, threatening standalone players who make their whole businesses out of providing similar offerings. Most importantly, Amazon doesn’t need to make any money directly from any of these services – indeed, it likely loses quite a bit of money on its video and music offerings in particular, simply because the benefits of increased stickiness on spending on Amazon.com outweigh any costs.
This week, Amazon was reported to have created a secret group to work on healthcare projects including electronic medical records and telemedicine, and Amazon also recently created calling and messaging apps for its Echo devices and the accompanying Alexa apps. Though it would be tempting to write Amazon off as having no basis on which to build either of these businesses – after all, it’s historically served households rather than providing personalized services to individuals – the businesses it’s built in video, music, and beyond suggest that we should never underestimate Amazon’s ability to build new businesses off the back of its Prime subscription base. That doesn’t mean it will always be successful – its Fire Phone was a huge flop, after all – but it does mean that in the right business segments it has a decent shot at building a meaningful subscriber base for new services as a side effect of its investment in the Prime flywheel.
One may wonder, what is the “right” amount of research and development (R&D)? Should a company spend a lot on potential future projects or not? It’s understood that most R&D expenditures will not bear fruit. The vast bulk will never generate profitable businesses or even come to market. How does one decide whom to underwrite among all those nerdy scientists doing inexplicable things in their labs?
This week, Apple debuted a new ad centered on Siri and starring actor Dwayne “The Rock” Johnson. In “The Rock x Siri Dominate the Day,” Johnson uses Siri to take on even more in his busy schedule – from reviewing his life goals to helping him navigate the globe to pursue his many interests, to taking selfies in outer space. The ad is fun and it might help some iOS users try new things with Siri, although I am pretty sure nobody will try a selfie from outer space! Overall though, the ad is an attempt to make Siri sexy as Apple prepares to bring the HomePod to market later in the year.
In my short but very exciting Twitter exchange with “The Rock”, he suggested I should watch the ad before jumping into my column for this week.
Hah just wait til we launch the commercial tomorrow then jump in the column 🙌🏾
— Dwayne Johnson (@TheRock) July 24, 2017
Watching the ad made me reflect, once again, on how much more work there is to be done both from the technology side and on the users’ acceptance side. I started thinking about what I would be looking for if I were interviewing to hire the perfect digital assistant.
I am a busy working mom looking for a full-time digital assistant who could help me breeze through my day as if I were in total control, having a world of fun with my family yet not missing a beat at work. Most of all, I need a digital assistant that gets a lot done for me while making me feel I am doing it all on my own!
This job is not for the fainthearted! It’s a 24/7 position where you should not expect any please and thank yous. Some cross-room shouting might occur and you will be entitled to zero days off!
You might also have:
If interested, please apply below and we can discuss compensation and benefits!
Over the last few weeks, I have had to field numerous calls from various people in the media asking me to respond to the rumors suggesting that Apple’s new iPhone with an OLED screen could cost well over $1,000.
While a $1,000-plus price for an iPhone may be shocking to many, the reason behind it is the basic economics of supply and demand. And given the questions I have gotten from many of the media who have asked me about this, it seems they either did not take economics in school or are just not seeing the big picture around this important economic principle.
Honestly, it feels as though Flash has been dead for some time. But today Adobe officially announced the end of life for Adobe Flash and all I can say is YES! Flash is an absolute resource hog on both CPU and battery, and the Internet is, and will be, a better place without it.
No one likes to think about limits, especially in the tech industry, where the idea of putting constraints on almost anything is perceived as anathema.
In fact, arguably, the entire tech industry is built on the concept of bursting through limitations and enabling things that weren’t possible before. New technology developments have clearly created incredible new capabilities and opportunities, and have generally helped improve the world around us.
But there does come a point—and I think we’ve arrived there—where it’s worth stepping back to both think about and talk about the potential value of, yes, technology limits…on several different levels.
On a technical level, we’ve reached a point where advances in computing applications like AI, or medical applications like gene splicing, are raising even more ethical questions than practical ones on issues such as how they work and for what applications they might be used. Not surprisingly, there aren’t any clear or easy answers to these questions, and it’s going to take a lot more time and thought to create frameworks or guidelines for both the appropriate and inappropriate uses of these potentially life-changing technologies.
Does this mean these kinds of technological advances should be stopped? Of course not. But having more discourse on the types of technologies that get created and released certainly needs to happen.
Even on a practical level, the need for limiting people’s expectations about what a technology can or cannot do is becoming increasingly important. With science-fiction-like advances becoming daily occurrences, it’s easy to fall into the trap that there are no limits to what a given technology can do. As a result, people are increasingly willing to believe and accept almost any kind of statements or predictions about the future of many increasingly well-known technologies, from autonomous driving to VR to AI and machine learning. I hate to say it, but it’s the fake news of tech.
Just as we’ve seen the fallout from fake news on all sides of the political perspective, so too are we starting to see that unbridled and unlimited expectations for certain new technologies are starting to have negative implications of their own. Essentially, we’re starting to build unrealistic expectations for a tech-driven nirvana that doesn’t clearly jibe with the realities of the modern world, particularly in the timeframes that are often discussed.
In fact, I’d argue that a lot of the current perspectives on where the technology industry is and where it’s headed are based on a variety of false pretenses, some positively biased and some negatively biased. On the positive side, there’s a sense that technologies like AI or autonomous driving are going to solve enormous societal issues in a matter of a few years. On the negative side, there are some who see the tech industry as being in a stagnant period, still hunting for the next big thing beyond the smartphone.
Neither perspective is accurate, but ironically, both stem from the same myth of limitlessness that seems to pervade much of the thinking in the tech industry. For those with the positive spin, I think it’s critical to be willing to admit to a technology’s limitations, in addition to touting its capabilities.
So, for example, it’s OK to talk about the benefits that something like autonomous driving can bring to certain people in certain environments, but it’s equally important to acknowledge that it isn’t going to be a great fit for everyone, everywhere. Realistically and practically speaking, we are still a very long way from having a physical, legal, economic and political environment for autonomous cars to dramatically impact the transportation needs of most consumers. On the other hand, the ability for these autonomous transportation technologies to start having a dramatic impact on public transportation systems or shipping fleets over the next several years seems much more realistic (even if it is a lot less sexy).
For those with a more negative bias, it’s important to recognize that not all technologies have to be universally applicable to make them useful or successful. The newly relaunched Google Glass, for example, is no longer trying to be the next-generation computing device and industry disruptor that it was initially thought to be. Instead, it’s being focused on (or limited to) work-based applications where it’s a great fit. As a result, it won’t see the kind of sales figures that something like an iPhone will, but that’s OK, because it’s actually doing what it is best designed to do.
Accepting and publicly acknowledging that certain technologies can’t do some things isn’t a form of weakness—it’s a form of strength. In fact, it creates a more realistic scenario for them to succeed. Similarly, recognizing that while some technologies are great, they may not be great for everything, doesn’t mean they’re a failure. Some technologies and products can be great for certain sub-segments of the market and still be both a technical and financial success.
If, however, we keep thinking that every new technology or tech industry concept can be endlessly extended without limits—everything in my life as service, really?—we’re bound to be greatly disappointed on many different levels. Instead, if we view them within a more limited and, in some cases more specialized, scope, then we’re much more likely to accurately judge what they can (or cannot) do and set expectations accordingly. That’s not a limit, it’s a value.
One of the most fascinating things about the consumer technology industry is the range of business models in evidence among the various companies. Though software may indeed be said to be eating the world, what’s fascinating to me is that almost no business models are based on selling software. Instead, we’re seeing the rise of two dominant business models in almost all of consumer digital media: subscriptions and advertising. And as these take over on the content side of the industry, they’re more likely to take increasing share of other parts of the industry including hardware as well.
With the huge amount of interest in the new iPhones, rumors are rampant about their features. We’ve heard from industry and stock analysts, tech bloggers and other followers of Apple, reporting on rumors from insiders at the manufacturing plants, accessory companies, and component suppliers.
The interest is well-founded. It’s been three years with no significant external changes to the iPhone. This is the tenth anniversary of the introduction of the first iPhone, and the competition – notably Samsung and Google – has made significant strides in offering competitive products. Most importantly, the iPhone represents almost two-thirds of Apple’s sales.
I thought I’d take a different approach in speculating about what’s ahead. As a product design consultant, but, more importantly, as a product reviewer who examines many products throughout the year from the perspective of the customer, I’m going to offer what I think customers would value the most and what would cause them to upgrade. In other words, what new features do iPhone users really want?
The top item on my list for improvement is longer battery life. I’ve talked with dozens of iPhone owners, and the number one complaint I hear is the need to recharge their phones to get through the day. I carry an iPhone 6 with a recently replaced battery in an Apple battery case and often run out before dinner time. Granted, the newer models have a little more capacity, but still not enough to match the increased use of phones for the many new activities we now do. Apple has erred on the side of thinness at the expense of battery life, and I hope they’ll fix this major weakness.
My second priority is the display. Apple has not improved its resolution and basic design for the past three years and has fallen behind the competition, particularly Samsung. I’ve been using a Samsung Galaxy S8 from AT&T side by side with my iPhone, and invariably I’ll reach for the S8 when I want to do any extensive reading of email or Internet content. The display on the S8 is much higher in contrast and sharpness, and the characters appear to float on the surface of the display rather than sit below it.
Third is the form factor. Anything Apple can do to provide a greater display area in a physically smaller package is a benefit to the user. More text on the display means less scrolling. Samsung has accomplished this by eliminating the bezels along the sides and going to a more elongated display that results in a significantly larger display area in the same overall package size.
Fourth is durability. If Apple were only to offer a phone that could survive a 36-inch drop onto a sidewalk and a long dip in the pool, they’d be well ahead of the Samsung S8, which is one of the most fragile of phones. Its all-glass construction is as fragile as a Riedel wine glass. We’re already seeing added durability on some Lenovo Motorola phones and Samsung’s Active models.
I know Apple will do a great job with the industrial design (ID) as they always have. While a great ID can wow us, as the S8’s ID wowed me, it’s something that becomes a little less important after the initial excitement wears off. I’d much prefer an ID that incorporates the important functions noted above to one that compromises those features for design’s sake.
Lastly, one of the most important customer features Apple offers is its superb customer support that none of their competitors can match. If you’re near an Apple Store you can often get service while you wait and get a wide range of assistance at no cost from well-trained, attentive employees.
But increasingly often you need to wait several days to get an appointment at the Genius Bar. I’d like to see Apple offer an even higher level of service, particularly for those who are not near an Apple Store: deliver a replacement or loaner phone to your home in 24 hours or less, much like Amazon can do with an order.
I’ve left off other features that others are actively anticipating and even wishing for, such as wireless charging, facial recognition, embedded fingerprint readers, and curved displays. That’s because I don’t think they matter as much to the customer as they do to those of us that cover technology. People learn to sign in one way or another and give little thought to it once they do it a few times as long as it’s secure. Wireless charging is a minor convenience that becomes less important with a longer battery life.
I have no idea whether Apple will do any of these things, but for the sake of providing what most benefits their customers, I hope some of these features will be included.
This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the recent earnings report from Microsoft, the release of Samsung’s Bixby Voice-based UI, the re-introduction of Google Glass, and more.
If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast
News this week that Intel has eliminated its wearables team, which was part of its New Technologies Group, is just the latest in what seems like an ongoing drumbeat of bad news about the category. In recent times pioneering brands such as Pebble and Jawbone have departed the market, and the brand synonymous with the category, Fitbit, has faced a string of challenging quarters. But while broad interest in the category may be in a lull, I’m convinced the technology still has an important role to play going forward, across a wide range of use cases. Companies that throw in the towel now may regret doing so down the road.
When wearables first started to gain attention in the market, too many people—industry analysts included—attached huge expectations to the category. Major brands such as Samsung entered the market with great fanfare. Google rolled out Android Wear with numerous partners. And Apple launched the Apple Watch. The hype reached a fever pitch as people and companies, looking for the next big thing after smartphones, pinned their hopes on wearables. The problem: Too many were unwilling to accept the fact that we’re not likely to see again any tech product that ships as many units, or impacts the world as greatly, as the smartphone.
And so, for many, the wearable market seems to be a bust, unable to live up to the unreasonable expectations placed upon it. However, while it is certainly true that many of the early brands that entered the market have fallen on hard times, the market itself has actually continued to grow at a reasonable clip. In 2016, IDC estimates the total wearable market, which includes everything from fitness trackers to smartwatches, smart ear-worn devices to smart clothing, grew to nearly 105M units for the year. That’s a year-over-year increase of 27%, with revenues that totaled about $16.3B. And the market will continue to grow for the foreseeable future. By 2021, IDC predicts the market will hit 240M units with revenues well north of $37B.
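As a quick sanity check on those forecast figures, the implied compound annual growth rate from roughly 105M units in 2016 to the predicted 240M units in 2021 can be worked out directly (a minimal sketch using only the IDC numbers quoted above; the five-year span is the stated forecast window):

```python
# Implied compound annual growth rate (CAGR) from IDC's wearables figures:
# ~105M units shipped in 2016, ~240M units predicted for 2021.
units_2016 = 105e6
units_2021 = 240e6
years = 2021 - 2016

# CAGR = (end / start) ** (1 / years) - 1
cagr = (units_2021 / units_2016) ** (1 / years) - 1
print(f"Implied CAGR 2016-2021: {cagr:.1%}")  # roughly 18% per year
```

In other words, IDC’s forecast assumes growth cools from the 27% seen in 2016 to something closer to 18% per year on average, which is still a healthy clip for a category so widely written off.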
New Expectations, New Opportunities
While many in the industry have moved on from wearables, Apple is clearly still focused on the category. While WatchOS didn’t receive much stage time at WWDC, the company continues to build out the platform, and there is no doubt we’ll see more hardware down the road. And the watch isn’t the only body-worn product Apple has in the market. Its AirPods, on back order since launching at the end of 2016, have quietly become many Apple fans’ favorite new product. And what makes the AirPods special isn’t just the elimination of the wires, but the custom Apple silicon on board that brings a level of interaction capabilities to the product that you can’t find with just any Bluetooth headset. The number of new features and functions that Apple could eventually tie to AirPods is sizeable. And it points to one of the luxuries of being Apple: It can play the long game. It doesn’t have to make outsized profits from Apple Watch or AirPods to keep the products in the market and ever evolving.
I’m convinced that Apple believes what I do: That there are still numerous opportunities for wearables in the markets where they currently play (Jan Dawson’s recent column illustrates this well). And perhaps more importantly, the category will play a key role in enabling other new technologies and capabilities down the road.
Near term, in addition to the evolving story around health and fitness, I expect wearables to play an increasingly important role in the areas of biometrics, security, and digital payments. Longer term, however, is where things get even more interesting, as wearables are likely to play an important role in the evolution of human to machine interfaces. I’m especially excited about the opportunities that will present themselves as augmented reality technologies come to fruition. In the future, we’re likely to interact with technology in a wide range of ways, using our eyes, our ears, our voice, and our hands. Wearables may prove to be the single best way to capture much of this input.
In other words, if you don’t believe tapping on glass is the best we can do in this regard, then you should continue to watch the wearables space. Vendors may come and go, and the volumes won’t come close to that of the smartphone, but the category may well be an important predictor of the future.
This week, I was driving in my neighborhood when I spotted that most American of sights: a bunch of kids running a lemonade stand, waving signs and trying to flag down passing cars. In some ways, it seemed like a great business opportunity – the temperatures where I am have rarely dipped below the high 90s Fahrenheit lately. And yet I didn’t stop – not because I don’t like lemonade (or kids), but because I simply don’t carry cash anymore, and I’m fairly sure the neighbor children weren’t taking credit cards. That got me thinking about all the people and sectors of our economy which are still dependent on cash, and how they might be affected by our increasingly cashless society.
Whether anecdotally or based on solid data, I think most of us have a sense that cash is in decline. One study from last year suggests that cash is the preferred payment method of just 11% of US consumers, with 75% preferring cards. In other markets such as China, cash is dying out even more quickly, with mobile payments increasingly eating into both its share and that of cards. Though my local dry cleaner in New Jersey was a rare (and suspicious) exception, I very rarely come across businesses that don’t take cards, to the extent that it now really takes me aback when it happens. For many of us these days, credit and debit cards and to a lesser extent mobile payments are making cash largely irrelevant. I still have a huge jar of loose change I accumulated over many years and which now mostly gets used for the occasional school lunch or visits from the tooth fairy, but not much else.
However, assuming that this pattern holds for everyone would be a mistake. There are still big sectors of the economy, and large groups of people, who remain heavy users of cash and heavily dependent on it, and as others move away from it, that’s increasingly going to cause them problems. Sadly, this likely applies most to some of the more vulnerable and marginalized parts of our society, who will be least in a position to make the changes necessary to keep up as the rest of society moves on.
Here are just a few examples of people or businesses still dependent on cash:
The list could go on much longer than that, but the point is that there are those who are in some cases heavily dependent on cash and relatively powerless to make the changes necessary to keep up. These are often among the poorer and least educated people in our society, and therefore those with least access to technology, the traditional banking infrastructure, or information about how to adapt.
The tech industry has offered partial solutions, but mostly in self-serving ways. Payment processing company Square has transformed many a small retailer or producer from a cash-only business to one that can take credit cards and even Apple Pay, and created ways for those without traditional cards to carry balances and make payments with their phones. Amazon has introduced methods for those who deal mostly in cash to obtain one-off or refillable cards to be used to pay for things on its site. Venmo has turned erstwhile cash transfers into electronic payments. But these solutions mostly tear down limits to the addressable markets for their own products, without necessarily expanding economic opportunity or promoting inclusion, while also often being based on internet and mobile technology not available to all.
What we need are solutions for the rest of society, and especially for those without access to the internet and phones, to be able to receive non-cash payments. What about an app that allows patrons or would-be donors to set up a transaction in an app, and allows the recipient to walk into a bank or store to pick it up in cash with a privately shared code? Or an app that allows users of basic smartphones to receive payments and carry a balance without creating an ongoing relationship with the payer? What about a service that would provide meals, access to beds and other facilities, or other needed items to the homeless based on donations from smartphone users? Technology has such enormous potential to reduce friction and make payments simpler, but what we need are innovations that do the same on the receiving end, including in ways that don’t themselves require technological solutions.
I feel like calling on the tech industry to step up to big societal problems has been something of a theme lately in my columns, but I can’t help but think that this is yet another area where those already most on the fringes of society will just be left further marginalized by technology rather than brought into the fold by it. It doesn’t need to be that way: the bright minds who have created so many technologies that help us deal with our “first world problems” can surely find ways to help those with more biting and pressing challenges as our society continues to evolve.