Technology writers love to write about disruption: when a simpler, less costly product or service displaces an incumbent one that’s more complex and costly. One industry that’s ripe for disruption is financial services. Any non-expert who’s had to deal with a 401(k), IRA, or other investment instrument knows that they are often outrageously complicated. And the actively managed mutual funds that tend to make up the bulk of these offerings are quite expensive, with fees of 1-2% on the dollars invested, on top of the fees of your personal advisor. In recent years, a handful of companies—often given the unfortunate label of robo-advisors—have introduced technology-based products that promise to simplify the process for investors, at lower costs than traditional services. (NOTE: I’m not a financial advisor, so don’t construe the following as financial advice.)
Using Tech to Invest
Robo-advisors use algorithms to handle a long list of things that have traditionally fallen to investors themselves, or that they’ve typically paid a human advisor to handle. Instead of using actively managed funds, which carry the aforementioned fees, they tend to use low-cost index funds or exchange-traded funds (ETFs). And instead of charging another 1-2% on top of this to manage your account, they charge on average about 0.25% (some even offer to manage up to a certain dollar amount for free). For these companies, it’s all about scale. Here are some of the things they can do:
Pick the initial investments. The most daunting task for any novice investor comes right up front, when they need to pick the funds in which to invest. Robo-advisors do this for you, by asking a series of questions to ascertain your risk tolerance and goals. Most also factor in your existing assets. While many existing financial services offer similar tools, in the end they typically still force you to choose your own investments based on the automated advice. Robo-advisors, by contrast, handle it all, explaining what they plan to buy and why. As an investor, you can then choose how much you want to know about your portfolio, from just a top-line number to high-level asset class descriptions to individual fund names.
Rebalance the portfolio. Most traditional financial services will rebalance a portfolio—moving funds from one bucket to another to stay within prescribed investment targets—on a quarterly basis at best. Because robo-advisors are fully automated systems, most rebalance a portfolio any time it drifts out of balance, not on a set schedule.
Tax-loss harvesting. High-dollar investors have had access to tax-loss harvesting from traditional financial firms for years, but robo-advisors make this complicated process available to the average investor. It is essentially the selling of a stock that has declined in value to offset taxes on other stock gains.
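To make the continuous-rebalancing idea concrete, here is a minimal sketch of a threshold-based rebalancing check, the kind of automated rule a robo-advisor might run. The 60/40 targets, dollar amounts, and 5% drift threshold are illustrative assumptions, not any particular advisor’s actual parameters.

```python
# Illustrative threshold-based rebalancing check. All figures are assumptions.

def rebalance_orders(holdings, targets, threshold=0.05):
    """Return dollar buy (+) / sell (-) amounts for assets whose current
    weight has drifted more than `threshold` from its target weight."""
    total = sum(holdings.values())
    orders = {}
    for asset, target_weight in targets.items():
        current_weight = holdings.get(asset, 0) / total
        if abs(current_weight - target_weight) > threshold:
            orders[asset] = target_weight * total - holdings.get(asset, 0)
    return orders

# Example: stocks have run 10 points past a 60/40 target, so the system
# would sell $10,000 of stocks and buy $10,000 of bonds.
holdings = {"stocks": 70_000, "bonds": 30_000}
targets = {"stocks": 0.60, "bonds": 0.40}
print(rebalance_orders(holdings, targets))
```

A human advisor runs this check quarterly at best; software can run it on every market close, which is the whole advantage.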
Not for Everyone
Different robo-advisors offer additional features, but the underlying theme is the same: using technology to replace high-cost, high-touch services with lower-cost, highly automated systems. The result is that more people gain access to good investment services, and they potentially get to take home more of their investment earnings long-term.
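That long-term difference compounds dramatically. A minimal sketch of the arithmetic, assuming a hypothetical 7% gross annual return on $10,000 over 30 years (both figures are illustrative, not a forecast):

```python
# Compare ending balances under the fee levels discussed above: a 0.25%
# robo-advisor fee versus traditional 1% and 2% all-in fees. The 7% gross
# return and 30-year horizon are illustrative assumptions.

def final_balance(principal, gross_return, annual_fee, years):
    """Compound yearly at the gross return minus the annual fee."""
    return principal * (1 + gross_return - annual_fee) ** years

for fee in (0.0025, 0.01, 0.02):
    balance = final_balance(10_000, 0.07, fee, 30)
    print(f"{fee:.2%} annual fee -> ${balance:,.0f} after 30 years")
```

On these assumptions, the 0.25% account ends up tens of thousands of dollars ahead of the 2% one over 30 years, which is the whole pitch.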
Obviously, not everyone will be comfortable turning their money over to a highly automated system. Many people will always want to know that there is a real-life person watching over their money, even if that means paying more. At present, robo-advisors make up an exceedingly small percentage of the funds under management in the world. But many of the large, existing financial services firms have recently moved to create offerings similar to those from VC-backed startups. Online reviews of these me-too services have been largely lukewarm, as they tend to cost more and offer fewer features, which is what commonly happens when incumbents attempt to address the entrance of disruptors. It will be very interesting to monitor the growth of robo-advisors over the next couple of years, to see how much of the market they capture, how they impact the broader financial markets, and how well their customers fare over time.
After launching an update for mobile, Skype gets a face lift for PC and Mac. For non-Windows 10 users, the experience is similar to the mobile experience but takes advantage of the larger screen. Group chats, calls, and real-time screen and photo sharing are some of the areas that deliver an improved experience. Message reactions, mentions, and a notification panel have been added to provide more information about your activity.
We live in an era that is hard for many to understand because we are observing many companies winning. It may not be obvious how or why they are winning but, I’m certain, we need to redefine the idea of winning. As my friend Benedict Evans (you will hear his name often in this post since we got lunch yesterday and had an intellectually stimulating chat!) likes to say, “Apple and Google both won mobile.” For many who saw the first computing era pass and observed only one winner, Microsoft, the idea that multiple companies can win may seem far-fetched. Yet, that is exactly what happened. Both Apple and Google won mobile in their own ways.
A couple of weeks ago, I wrote in this space about the increasingly self-reinforcing dominance of a group of very large companies in the tech industry. Those companies, I argued, have made it all but impossible for smaller companies to break into the industry, to grow, and to build sustainable businesses without being either wiped out or acquired in the process by the industry’s giants. Though as a general rule all of that is true, there are exceptions out there, in the form of a handful of small but successful companies that have somehow managed to survive surrounded by much larger competitors. It’s worth looking at some of them and how they’ve achieved what they have, to see if there are lessons for others.
Anker – Compete Where Others Don’t Want To
Anker was, in fact, the company that made me want to write this piece. It was in the news last week when it launched an Amazon Echo competitor based on Alexa, just the latest in a fascinating series of consumer electronics products which started with batteries and went on from there. The Verge did a great profile on the company and its history, which is well worth a read if you’re interested. The key to Anker’s success seems to have been a narrow focus on competing in areas where the big players really didn’t want to. That began with accessories like batteries, first replacement batteries and then external batteries for smartphones, a place where the big companies either didn’t want to play at all or wanted to offer products at margins that provided a nice price umbrella under which companies like Anker could compete.
But the company has taken that starting point and grown from there, expanding into home automation devices and arguably taking the same quality-plus-affordability approach it took to accessories, with the Echo Dot competitor the latest example of that push. Because it undercuts others on price but has built a reputation for reliability, it occupies a distinctive niche between the low-cost no-name brands out of China and the more expensive products from the big established brands. There are a few lessons here for others: compete where the big players don’t want to; build from an innocuous base to compete more directly with the larger companies; and, lastly, don’t forget that brands don’t all have to be built at the high end.
Roku – Be Switzerland in a World at War
Roku is another company that stands out as a rare exception – a smaller player which competes head on with some of the biggest names in the business and yet has not only survived but thrived in terms of market share. Roku started out as an arm of Netflix, making hardware for its fledgling streaming service, but was soon spun out on its own and has since made a business out of providing the neutral TV box in a world where essentially all the major competitors are owned by big ecosystems. Though Apple, Google, Amazon, Microsoft, and Sony all have offerings in the space, Roku has the largest market share in the US, through a combination of a range of price points and a certain neutrality in the ecosystem wars.
But Roku’s next big step was pivoting from its focus on first-party hardware to providing a platform for others under threat from the big ecosystems, offering its operating system as a way for other smaller players to break into the smart TV space and bring a compelling and rich set of apps to that market. Again, it offered neutrality where others offered only walled gardens and ecosystem favoritism, and has now gained substantial market share in the smart TV operating system space too. That pivot is still in its early stages, and Roku still likely makes a good majority of its revenue from hardware rather than licensing, but that balance will continue to shift as Roku prepares for an IPO. The key lesson here appears to be that there can be an opportunity in being the neutral player that offers an alternative to warring ecosystems, especially when none of those ecosystems has established a dominant position.
Airbnb – Create Brand New Markets in the Digital Layer
Airbnb is another fascinating company that has come from nowhere over the last few years to build a large and seemingly profitable business in the midst of otherwise dominant ecosystems. And it’s done it largely by creating a new market rather than competing in an existing one. Airbnb exists in what I call the Digital Layer – a business model in which infrastructure-light companies leverage existing physical infrastructure and proprietary software to connect buyers and sellers in such a way that new markets or liquidity are created. Arguably the biggest and most successful companies that have emerged over the last few years in the consumer tech industry all fall into this model – Uber and Lyft are the other big examples, but there’s a plethora of smaller ones too.
The key here is recognizing that a consumer tech company doesn’t have to compete in the consumer tech market and in fact the best opportunities exist outside the tech industry in traditional markets like accommodation, transportation, and retail. By digitizing those markets, these companies create new value that wasn’t there before, often enabled by ordinary people with no history in those markets who choose to supplement earnings or make their main income in this way. Creating just the right user experience, removing barriers and simplifying transactions through smartphone apps and other digital tools, then provides the differentiation needed against legacy business models. The big ecosystems so far haven’t participated in these markets at all, though ride sharing seems to be the market segment they’re most likely to enter, with both Alphabet and Apple dabbling already. But the lesson here is competing outside the constraints of the traditional tech industry and creating an opportunity where others didn’t see one.
No Simple Answers
For the purposes of this column, I’ve necessarily kept things pretty simple here, and arguably oversimplified somewhat what’s made these companies successful. In addition, the companies I’ve discussed are by no means the only small successful tech companies to have emerged over the last few years, and there are other strategies to achieve what they have. They do demonstrate that, however high the odds against success in a market dominated by giants, there are opportunities to be both found and created, and that it is still possible for the right combination of skill, timing, and smarts to carve out a niche where the big players won’t squash you. At least not right away.
The high end of the consumer market, often associated with the idea of prosumers and enthusiasts, is often overlooked as a segment with little impact on the overall sales and profitability of technology companies. Though unit sales in this window are smaller than in either the mobile or mainstream consumer space, the ASPs (average selling prices) skew high, resulting in much better profit margins than in lower segments. Not only do companies that successfully address the needs of the prosumer and enthusiast enjoy the ability to sell at lower financial risk, but there are also fringe benefits of being the market share and mindshare leader in these spaces.
The “halo effect” is one in which a flagship product that dominates headlines and performance metrics in the enthusiast markets sees benefits waterfall down into the more modestly priced hardware. Samsung is often a beneficiary of this idea, selling the Note and S-series of smartphones at high prices that convince those with smaller budgets to buy similar-looking and -feeling Samsung products once they enter the store. Influencers who do buy into the flagship product series will tout the superior benefits of these products to friends, family, and social groups. This gives confidence to system integrators, corporate buyers, and other consumers that the product they can afford will be similarly excellent.
In the PC field, prosumer and high-end segments will frequently reuse technology from workstation or data center class products. This saves on development time and costs, adding more to the profit margin of the already inflated segment.
For these reasons and more, it has been a weight around AMD’s ankle that it has not been competitive in either high-end desktop GPUs (graphics processors) or high-end desktop CPUs (central processors) for years. On the graphics front, the last high-end desktop (HEDT) product released was the Radeon Fury X, in June of 2015. Even at launch, the product was only moderately successful, bringing attention to AMD and the Radeon brand, but also difficulties in product reviews and quality control. Within the next 3-4 months, NVIDIA and its GeForce product family had completely retaken the leadership position with the never-challenged GTX 980 Ti. Then in May of 2016, NVIDIA extended its leadership position with another GTX family launch. This happened again in March of 2017, when NVIDIA thought AMD might be on the verge of releasing a competitive solution. It never showed.
The newly released Radeon RX Vega product line brings AMD back into the high-end prosumer and enthusiast picture, offering competitive pricing and performance against the NVIDIA lineup. It utilizes the same graphics architecture found in the workstation-class Radeon Pro cards and the enterprise-class, high-performance-compute Instinct family. Though there are early reports of stock and availability problems that AMD is working through, RX Vega gives AMD an opportunity to take back some market share in this influential space. The profit margin of RX Vega is questionable, however, with known cost concerns around the graphics processor and the chosen memory solution.
In the CPU space, AMD has never been in the HEDT segment that Intel created in 2008. In fact, AMD has been absent from the majority of competitive processor segments for more than a decade, depending on the integrated graphics portion of its designs to keep it afloat in trying times. With the release of Ryzen 7 in March of 2017, AMD started making waves once again. August sees the launch of the new Threadripper family, a high-performance processor that directly targets content creators, developers, engineers, and enthusiasts. Prices on these parts range from $799 to $999 and, because of heavy reuse of server designs, chip organization, and infrastructure, they will likely carry exceedingly high profit margins.
Threadripper doesn’t just make AMD competitive in a space that has previously been 100% dominated by Intel; it puts the company in a leadership position that is turning heads. Performance in workloads for video creation, 3D rendering, ray tracing, and more is better on the 16-core implementation AMD offers than on the 10-core designs Intel is presently limited to.
While there are no guarantees of market share improvements or profitability, every unit of RX Vega and Ryzen Threadripper sold means gains for AMD over Intel and NVIDIA. Intel product managers and executives, already awoken from slumber by Ryzen 7 in March, have perked up, seeing the threat to mindshare, if nothing else. The company is wary of threats to its perceived dominance and will react with lower prices and higher-performance options this year.
RX Vega is in a tougher spot, unable to come out as a clear winner in the field, even for a short while. NVIDIA has been sitting on a growing armory of designs and products, waiting to see how the competition would shake out before deciding whether to release them. For now, NVIDIA doesn’t appear to be overly concerned about the impact Vega will have in the high-end consumer space.
No product portfolio is perfect, but CEO Lisa Su and the executive team at AMD must be pleased with the recent shift in the company’s perception in the flagship consumer markets. The Radeon group can finally point to RX Vega as a reasonable option against all but the top-most GeForce offerings, even managing a performance-per-dollar advantage in parts of the lineup. For the processor division, Threadripper is a marvelous use of existing technology to address a market that has nothing but room to grow. The marketing and partnership opportunities continue to flow for AMD here, and Intel will be spinning for a bit to regain its footing. There are significant hurdles ahead (continued graphics innovation, competing in the mobile processor space) but AMD is surging upward.
In a post a few weeks ago, I talked about the growing body of data suggesting which product segments are most susceptible to a form of disruption theory known as low-end disruption. Through a series of recent conversations I’ve had with some investors, business school teachers, and “thought-leaders,” it became clear to me that many supposedly smart minds still fall into a dangerous trap. I’m calling this trap commodity thinking, and I’m doing so for a few reasons.
Understanding one’s true role and purpose is one of life’s greatest challenges. But it’s not supposed to be that way for devices. If they are to be successful, tech gadgets need to have a clear purpose, function, and set of capabilities that people can easily understand and appreciate. If not, well…there is a large and growing bin of technological castoffs.
Part of the reason that the wearable market hasn’t lived up to its early expectations is directly related to this existential crisis. Even now, several years after their debut, it’s still hard for most people to figure out exactly what these devices are, and for which uses they’re best suited.
Of course, wearables are far from a true failure. The Apple Watch, for example, has fared reasonably well. In fact, revenues from the Apple Watch turned the tech juggernaut into one of the top two highest grossing watchmakers in the world—though I’m starting to think that says a lot more about the watch industry than it necessarily does about smartwatches or wearables in general.
The problem is that we were led to believe that wearables—particularly smartwatches like the Apple Watch—were going to be general purpose computing and communication devices capable of all kinds of different applications. Clearly, that has not happened, though some seem to hold out hope that the possibility still exists.
Those hopes were particularly strong over the last few days with rumors about both a potential LTE modem-equipped version of the Apple Watch coming this fall and a potential deal between Apple and CIGNA to provide Apple Watches to their health insurance customers. Some have even argued that an LTE-equipped Apple Watch is a game-changer that can bring dramatic new life to the smartwatch and overall wearable industry.
The argument essentially is that by finally freeing a smartwatch from the tyranny of its smartphone connection, the smartwatch can finally evolve into the general-purpose tool it was always intended to be. Applications that depend on a network connection can run on their own, duplicative efforts on the watch and the phone can be eliminated, and who knows, maybe we can finally get the Dick Tracy videophone watch we’ve always dreamt of.
Color me skeptical. Sure, it would be nice to be able to, say, use Spotify or other streaming apps to get dynamic playlists as you exercise, or get texts and other phone-related notifications while you’re away from your phone. Industry-changing and market moving, however, it is not—especially when you factor in the additional costs for both the modem and the service plan you’re going to have to sign up for as well.
Plus, let’s not forget that several vendors (notably Samsung and LG) have already released modem-equipped smartwatches, and they haven’t exactly stormed up the device sales charts. This is due, in part, to the same basic physics challenge that Apple will also have to face: add a modem to a device and it will reduce battery life. Given that many people are frustrated with the battery life on their existing smartwatches, having to dramatically (or even minimally) increase the size of the device in order to accommodate a larger battery seems like a strong challenge—even for the device wizards at Apple.
The potential of crafting a more healthcare friendly smartwatch, on the other hand, seems much more appealing to me and the alleged tie-up with CIGNA could be a very interesting move. Apple was rumored to have some very sophisticated sensors in the works when the Apple Watch was first announced—such as a non-invasive blood glucose monitoring component, and a pulse oximeter—and with every new release there’s increased expectations for those components to finally arrive. If (or when) they do, the healthcare benefits could prove to be significant for people who choose to use the device. Of course, the need to report all that data back to your insurance company on a regular basis—as a connection with a healthcare company certainly implies—will undoubtedly raise a number of privacy and security-related concerns as well.
Even if those new sensors do appear on the next generation Apple Watch, however, they will further cement the growing sentiment that wearables are actually specialty-purpose devices that are really optimized for a few specific tasks. Not that that’s a bad thing—it’s just a different reality than many people envisioned.
In the end, though, dispelling the myth that wearables can or should be general purpose devices could, ironically, be the very thing that helps them finally reach the wider audience that many originally thought they could.
One of the markets I track closely is the US wireless industry, and especially the five largest providers: AT&T, Sprint, T-Mobile, TracFone, and Verizon Wireless. All of these companies recently reported their financial results for Q2 2017, and as a result we now have a good picture of what happened in the quarter. Here are three key insights from those results.
As of late, I’ve been wondering if we are thinking about the evolution of automobiles, transportation, commuting, etc., completely backward. I’ve been reading dozens of reports and research notes from component and supply chain vendors on electric vehicles, and it is clear a big shift to electric powertrains is upon us. However, everyone assumes, for the moment, that future autonomous vehicles and commuter transportation systems will look similar to the cars we know today. I believe a form factor shift will take place as well at some point.
There is no doubt we will convert existing cars to electric as the first step in evolving how cars are made, how they function, and how they reach full level five autonomy. In case you have never seen the chart explaining the levels of autonomy, and where we are today, here is a useful diagram.
As you can see from the chart, we are only at the level 2 stage of autonomous technology. Looking over how the other levels were defined, it is likely it will take many years still to reach full autonomy where no interaction from the passenger is ever needed.
Before autonomous cars become a reality, we need to transition to electric vehicles. Most reports on the automotive industry indicated electric cars would be cost competitive with gasoline cars around 2020-2021. This is a big first step to get consumers to make a move to electric vehicles and then over time autonomous driving features.
The timeframe from 2020-2030 is what most experts estimate will be the decade where electric cars gain traction and start moving toward true level 5 autonomy. It may take until 2025 or so before we see level 5 reached and approved by regulators.
Massive change in automotive manufacturing is underway at the moment, as nearly all auto brands are in the process of shifting to electric vehicle production and working on their autonomous driving strategies. But part of me wonders whether level 5 autonomy will go mainstream not in things that look like today’s cars, but in a different form factor entirely.
Through a range of conversations I’ve had with investors and other experts, it seems there is a chance the technology that will someday scale and become pervasive may evolve more from electric bicycle technology than from car technology. If, in the future, we simply become passengers and not drivers, then there is no need for these large, cumbersome vehicles on the road. There are already very small single- and double-passenger “pods” on the road today, and they are a much more efficient use of space, both parked and on the road. I can imagine our fully autonomous future being made up of these much smaller pods transporting passengers rather than large cars.
The promise of full autonomy has always been that cars will be able to talk to each other and therefore can be packed in much closer together on the road. Nearly all simulations of the future you see simply have a highway packed with full size or compact cars all within a few feet from each other. This is so more people can fit on the road. But flip the equation and imagine we are all in smaller pods where four could fit on the road in the same space a mid-size car fits in today, and you technically can fit four times as many people on the road.
There will no doubt be a mix of sizes since some families will need larger vehicles and we will still have buses and trucks on the road, but I have a strong sense the vast majority of commuter vehicles will be small pods vs. larger cars like we have today.
This scenario makes sense as it requires less battery, a smaller drivetrain, fewer sensors, etc. What got me started thinking about this was my own experimentation and testing of a range of electric bicycles. I tested bikes with drivetrains as low as 250 watts and as high as 1,000 watts. Even on a 250-watt drivetrain, my 160-pound body could clear 25 MPH. And on bikes with a 1,000-watt drivetrain I could get over 40 MPH. The battery is a small canister that charges fast and lasts for 25 miles in full electric mode. While not quite the range of a fully electric car, the point is that if the future design of passenger vehicles looks like pods, they will be cheaper, need less battery size and capacity, use smaller drivetrains, and overall be more efficient.
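Those wattage figures track with basic physics: at steady speed on flat ground, required power is aerodynamic drag plus rolling resistance, and the drag term grows with the cube of speed. The sketch below uses assumed values for rider-plus-bike mass, drag area, and rolling resistance (and note that the rider’s pedaling adds to the motor’s nominal wattage), so treat the numbers as rough illustrations of why small, light vehicles need so little power, not as measurements.

```python
# Rough steady-state power model for a small electric vehicle on flat
# ground with no wind. Mass, drag area (CdA), and rolling-resistance
# coefficient are assumed values for an upright rider on an e-bike.

def power_needed_watts(speed_mph, mass_kg=100, cda_m2=0.5, crr=0.006, air_density=1.2):
    v = speed_mph * 0.44704                      # convert mph to m/s
    aero = 0.5 * air_density * cda_m2 * v ** 3   # drag power, cubic in speed
    rolling = crr * mass_kg * 9.81 * v           # rolling-resistance power
    return aero + rolling

for mph in (25, 40):
    print(f"{mph} mph needs roughly {power_needed_watts(mph):.0f} W at the wheel")
```

On these assumptions, 40 MPH takes several times the power of 25 MPH even though the speed only rose 60%; that cubic drag term is exactly why small pods, with low mass and frontal area, can get by with e-bike-class drivetrains.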
The entire industry is moving to a manufacturing process to build electric cars and add full autonomy to a form factor that I don’t believe will be as common on the road in the future, especially in urban areas. But all that investment and R&D in manufacturing infrastructure for big cars will only pave the way for cheaper, more efficient smaller ones once the shift from drivers to passengers fully takes place.
This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell chatting about Consumer Reports’ decision to no longer recommend Microsoft Surface devices, analyzing NVIDIA’s earnings, and discussing Google’s controversial diversity memo and the issues it has raised for Silicon Valley.
Consumer Reports based its decision on the results of an annual subscriber survey about the products those subscribers own and use. It estimates Microsoft’s laptops and tablets will experience breakage rates of 25% within two years of ownership, with breakage loosely defined as any issue that prevents the computer from working as the owner expects. As a result, Consumer Reports added that it couldn’t currently recommend any other Microsoft laptops or tablets, including the latest Surface Pro model introduced in June.
As we approach September and the anticipated announcement of the next iPhone(s), speculation is running high about what game-changing new features will be offered — glass body, wireless charging, a higher-quality screen, AR, and so on. But an under-addressed question is whether Apple’s next phone will support all the new wireless spectrum being deployed. There has been a lot of action on the spectrum front: the recently completed 600 MHz auctions; operators’ launch of new LTE bands; the rollout of LTE Unlicensed; and the awarding of FirstNet. Not surprisingly, Apple and the operators have been mum on the wireless specs of the new device. Lots of ‘no comments’ in response to inquiries. But there are 3-4 important bands the iPhone 8 (which we’ll call it for the sake of this column) will need to support in order to be competitive with the current state of the art, and to keep up with what the operators are planning to launch over the next year on the LTE and LTE Advanced roadmap.
Big question #1 is whether the iPhone 8 will support Band 66, also known as AWS-3 (2100 MHz). This was the big piece missing from the iPhone 7, and Apple received quite a bit of criticism for that omission. It is an important capacity band for AT&T and T-Mobile especially (DISH also has spectrum here). Most competing flagship phones, such as the Galaxy S8 and LG G6, support this band. It would be a glaring gap if Apple didn’t support it, so I’d give this a 95% likelihood.
Next up is Band 71, and this one is likely to land in the ‘no’ category for the iPhone 8. Band 71 is the 600 MHz spectrum band from the recently completed incentive auction. T-Mobile and AT&T were the big winners here, with DISH and Comcast also picking up healthy chunks of spectrum. Because of its dearth of low-band spectrum, T-Mobile is especially eager to deploy services in the 600 MHz band. The company has said it plans to have commercial operations in the band later this year, which is “when new 600 MHz smartphones from leading smartphone manufacturers are anticipated to arrive,” as it put it in a June press release. We do expect some flagship devices supporting Band 71 by the end of the year, but I’m not betting on the iPhone 8. In part, that’s because Apple does not tend to support bands that have not been widely deployed. Additionally, Apple’s tilt toward Intel (and/or the use of multiple modem suppliers) reduces the likelihood of 600 MHz support, since Intel’s latest chip does not support Band 71. That would be a bit unfortunate for T-Mobile, since 600 MHz is a key part of its strategy to narrow the coverage gap with AT&T and Verizon, especially outside cities.
The next big question pertains to LTE Unlicensed. LTE-U provides additional speed and capacity using carrier aggregation in the 5 GHz (Wi-Fi) band, as part of LTE Advanced. T-Mobile announced LTE-U support in six cities in June, with more planned in the coming months. Verizon is also planning to launch LTE-U in 2017. LTE-U utilizes Bands 252/255. The Samsung Galaxy S8 is the one flagship phone currently available that supports LTE-U. To me, it’s a toss-up as to whether the iPhone 8 will support this band, since it’s still in the relatively early stages of deployment. Given that some current and planned competing phones support LTE-U, I’d put the likelihood at 50% or better.
Finally, there’s FirstNet, the LTE-based public safety broadband network in Band 14 of the 700 MHz spectrum. AT&T was awarded the FirstNet contract earlier this year and will likely start building the network in earnest in 2018. In addition to deploying 20 MHz for public safety agencies, AT&T will also have the ability to use 40 MHz of spectrum for commercial cellular services. The first devices to support FirstNet are likely to be ruggedized, purpose-built phones, such as the currently available Lex F10 from Motorola. I’m not optimistic that the iPhone 8 will support this band.
The ability of the latest devices to take full advantage of cellular networks’ improved coverage, greater capacity, and faster speeds is as important as all the whiz-bang features promised with the current and anticipated crop of flagship phones. For example, T-Mobile has extolled the Samsung Galaxy S8 as one of the first phones capable of so-called ‘Gigabit LTE’, which is achieved through a combination of carrier aggregation, 4×4 MIMO, and 256 QAM. So for all those with AR and AI dreams about the next iPhone, let’s also hope that Apple continues to support the state of the art in cellular.
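To see where the “Gigabit” figure comes from, here is a back-of-envelope sketch of how carrier aggregation, 4×4 MIMO, and 256 QAM multiply together. The 75% overhead efficiency and the three-carrier, ten-layer configuration are illustrative assumptions, not a spec-exact Cat 16 calculation:

```python
# Rough peak-throughput estimate for "Gigabit LTE".
# Efficiency and carrier/layer mix below are assumptions for illustration.

def lte_layer_rate_mbps(bandwidth_mhz=20, bits_per_symbol=8, efficiency=0.75):
    """Approximate downlink rate of one spatial layer on one carrier.

    A 20 MHz LTE carrier has 100 resource blocks; each block is 12
    subcarriers wide, and a 1 ms subframe carries 14 OFDM symbols.
    256-QAM encodes 8 bits per symbol. `efficiency` roughly discounts
    control channels, reference signals, and coding overhead (assumed).
    """
    resource_blocks = 5 * bandwidth_mhz              # 100 RBs at 20 MHz
    symbols_per_sec = resource_blocks * 12 * 14 * 1000
    return symbols_per_sec * bits_per_symbol * efficiency / 1e6

def gigabit_lte_estimate():
    """Assumed setup: three aggregated 20 MHz carriers, 4x4 MIMO on two
    of them and 2x2 on the third (10 spatial layers), all at 256-QAM."""
    layers = 4 + 4 + 2
    return layers * lte_layer_rate_mbps()

print(round(gigabit_lte_estimate()), "Mbps")  # roughly 1 Gbps
```

One layer at 256 QAM works out to about 100 Mbps of usable throughput under these assumptions, so it takes roughly ten layers of aggregated spectrum to reach the gigabit mark.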
Reports from Bloomberg suggest Apple is working on designing a new piece of silicon specifically for AI or, more likely, machine learning. Anyone tracking semiconductor trends could have predicted this, since nearly every company working on AI/ML is using or designing a companion chip, like a dedicated ASIC or FPGA, for its AI efforts. At a fundamental level, these dedicated companion processors, programmed for specific tasks, are better suited to workloads like AI than general-purpose processors.
Facebook on Wednesday afternoon unveiled its Watch tab, which will be the new home for video viewing on Facebook and serve as a showcase for existing as well as new and exclusive video within the app. This effort follows several months’ availability of Facebook’s apps for various TV platforms, which have served as a test of sorts for the new in-app video tab. Facebook is clearly hoping that its big video push makes it more competitive against YouTube and allows it to both increase time spent in its apps and generate higher ad revenue, but there are significant risks to this pivot.
The Evolution of Facebook’s Video Strategy
Facebook’s strategy around video has evolved over the last few years in much the same way its mobile strategy had to evolve around the time of its IPO. Back in early 2014, Facebook talked about video mostly in a passive way, discussing the rise of video sharing by users on its service, and its expectation that this sharing would grow as smartphones became more capable and widespread. A year later, Facebook was talking about proactively building a platform for video, and in 2016 Mark Zuckerberg began describing video as a third phase in a shift that had already seen the majority of Facebook content go from text-based to image-based.
The last couple of years have seen active investment by Facebook not just in tools for creators but in content itself, first around live video and more recently around produced video, which would eventually end up in the Watch tab it announced this week. It has primed the pump by subsidizing the creation of content to populate that tab and increase the amount of high-quality content available on Facebook, while also creating new ways for video creators to monetize on its platform, starting with mid-roll ads. Now we’re seeing the creation of a place within the mobile app, where the vast majority of users engage with Facebook, that is explicitly devoted to video.
The Theory and the Risks
Facebook’s strategy here is fairly transparent: as consumption of content on Facebook has shifted from text to images to video, the content consumed has gone from being hosted on Facebook to being hosted elsewhere, notably YouTube. That, in turn, has meant that any ad revenue generated directly from the viewing of those videos has gone into Google’s coffers rather than Facebook’s. As such, it wants to shift that viewing and the associated ad revenue from YouTube to its own platform, much as its Instant Articles initiative has done that for news articles. In the process, it clearly hopes to increase time spent on content hosted on Facebook servers, and generate the higher CPMs that video ads command. That’s the theory.
However, there are a number of risks associated with this strategy, at least some of which stem from the decision to autoplay videos in the News Feed with the sound off. That, in turn, meant that ads could never run before videos as they do on YouTube, and mid-roll advertising was therefore the only viable option to monetize video on the platform. We’ve seen a push in that direction over recent months, and it’s the anecdotal evidence I’m seeing from that push that has me worried here. The chart below illustrates both the theory and the risks associated with this new video pivot:
As shown in the chart, the theory from the Facebook side is that total time spent will go up, and that the ads people see while watching video will generate higher CPMs. The risks are as follows:
The time people do spend will shift from the News Feed to the Watch tab
The nature of ads they will see will go from being native and non-interruptive to being non-native and extremely interruptive
Facebook will go from ad formats where it keeps essentially all the revenue to models where it has to pass along much of the revenue to content owners and therefore generate lower margins, as Mark Zuckerberg confirmed on the company’s recent earnings call.
All told, there’s a significant risk here that instead of spending more time on Facebook, people will try the new Watch tab, which Facebook will no doubt promote heavily as it has Marketplace and other recently added tabs, and then be put off by the mid-roll ads running in the videos they see there. The few ads people do see, meanwhile, will generate less margin for Facebook than the highly profitable ads they currently see in the News Feed. Instead of increasing time spent and ad revenue, Facebook could actually turn people off and end up with less revenue.
Autoplay Will Turn Out to be a Costly Unforced Error
I return here to the decision to run videos in autoplay mode without sound. That decision massively increased engagement with videos and so served its purpose well, but it made it impossible for Facebook and its content partners to monetize those videos the way other ad-supported online video is monetized. People simply aren’t used to watching video on either Facebook or YouTube that breaks partway through to show ads, and Facebook has only itself to blame for limiting its options now that it’s ready to turn on monetization for video. The great irony is that Facebook is now turning sound on by default for these autoplay videos, eliminating arguably the most effective aspect of the format and in the process neutralizing any benefits it might have gained from it in the first place.
It’s still possible that Facebook may be able to work its way through what at this point looks like a really costly unforced error. Perhaps the new content available on Facebook will end up being so compelling that the digital natives who’ve grown up on YouTube videos will sit through the ads anyway, but a generation trained on pre-roll ads on YouTube and no ads at all on Netflix is likely to have a tough time with random ads in the middle of very YouTube-like videos on Facebook. And that may make the hard pivot towards video Facebook is about to embark on really tough.
Apple’s upcoming iPhone 8, or whatever the company ends up calling it, has been in the news a lot lately. Reports suggest Apple has had trouble getting Touch ID to work under the glass screen and may instead move to using the camera for identification and eye scans, replacing the fingerprint-based Touch ID.
I hope this is true, because this is by far the easiest and, in my view, best way to authenticate a person accessing their iPhone. Samsung has this feature on the new Galaxy S8, and it is a dream to use compared with the awkwardly placed fingerprint reader on the back.
Last week I was in Korea to experience a new direct-lit LED Cinema Screen recently launched at a Lotte Cinema in Seoul. While I was there, I had the chance to sit down with the President of Samsung Electronics Mobile Communications, D.J. Koh, and CMO Younghee Lee to talk broadly about Samsung’s future. The focus was on what they learned from the Galaxy Note7 (Note7) issues and how committed they are to regaining the trust of the millions of Samsung users out there.
What Happened Since the Galaxy Note7 Recall
Back in January, Samsung held a press conference in Korea detailing what caused the Note7 incidents and what steps it was taking to make sure there would be no risks in the future. During the press conference, Mr. Koh and executives from UL, Exponent, and TUV Rheinland, who led independent investigations into various aspects of the Note7 incidents, explained that in both cases the short circuit was caused by damage to the separator that keeps the positive and negative electrodes in the jellyroll from meeting. In the case of battery A, the tip of the negative electrode was incorrectly located in the curve of the device. In the case of battery B, high welding burrs on the positive electrode penetrated the insulation tape, causing direct contact between the positive tab and the negative electrode.
Since then, Samsung has implemented multi-layer safety measures: improved safety standards for the materials used in battery design, protective brackets added around the battery, and improved algorithms that regulate battery charging.
Samsung also collaborated with the MIT Technology Review on a white paper that was published last week. The report shared more insights into the new 8-Point Battery Safety Check Samsung started implementing with the Galaxy S8 and S8+.
There is no doubt Samsung went above and beyond the call of duty with this new process, with the goal of sharing its findings with the broader industry. While the initial communication around the recall could be faulted, the rigor of the investigation and the follow-up steps taken cannot. Samsung has tried to be very transparent about how it can work to avoid another incident, and about how to be better prepared in case something does happen again. Catching an issue at production is almost as important as avoiding the issue in the first place.
Samsung Smartphone Owners are Ready for the Note8
No matter what Samsung says and does, though, at the end of the day what really matters is neither the reports reassuring everyone that everything is under control nor the press articles still referring to the Note7 explosions. What will make a difference to Galaxy Note8 sales is the confidence consumers still have in the brand.
This week, SurveyMonkey Audience released the results of a study it conducted among 1,000 US consumers to gauge their interest in the upcoming smartphone as well as their view of the Samsung brand.
I will focus my analysis on current Samsung owners vs. overall smartphone owners, because the Note family has not been a mainstream device. Its large screen size and pen input were not for everyone, and it certainly was not a device that in the past generated a lot of churn from the iPhone.
So, how do current Samsung owners feel about the brand?
Brand loyalty remains strong, as current owners are either extremely likely (47%) or very likely (34%) to consider Samsung when it is time to replace their current device.
Awareness of the upcoming Note8 release is good across all smartphone users interviewed, with only 38% saying they have not heard about it. As you would expect, awareness among current Samsung owners is much higher, with 25% saying they like to keep up with the latest news on Samsung devices and another 46% saying they have heard about the Note8 but don’t know much about it.
When it comes to the most interesting features rumored to be coming with the new model, 70% of current Samsung owners are most interested in the phone being waterproof, 35% in the dual camera, and another 35% in the fingerprint scanner.
For current Samsung owners, the top three most appealing reasons they would consider a Note8 are Features (52%), Reliability (50%), and Large Screen Size (38%). These data points underline that the Note as a device family has been seen by Samsung users as the flagship product. The Note7, in particular, with its strong feature set really helped to broaden the appeal to a wider audience outside of the large screen lovers. Reliability, as the second most wanted feature, does not seem to signal much concern that what happened with the Note7 might repeat with its successor.
If this were not enough of an indicator, when current Samsung smartphone owners were asked whether they would consider buying a Note8 in the aftermath of last year’s recall, 45% said yes and 37% said maybe, leaving only 18% saying no. Among the rejecters, the strongest reason for lack of consideration is high cost (31%), while the Note7 issues affect intention for another 28%.
Interestingly, most concerns seem to vanish when cost is not an issue. When current Samsung smartphone owners who said they would not consider buying the Note8 were asked if they would use a Note8 if it were free, 66% said they would and another 27% said maybe while only 7% said they would not.
Note Owners are Samsung’s Fiercest Fans
When you look at some of the data I just shared, but narrow it down to Note7 owners, the loyalty to the Samsung brand and the passion for the product comes across very strongly. Although the base size in the sample is more limited compared to the other cohorts, I think this data gives a good indication of how this segment behaves.
Note7 owners still think very highly of Samsung with 37% saying they find the brand extremely reliable and another 43% saying it is very reliable. When it comes to their next phone 43% are extremely likely to consider a Samsung device and another 30% are very likely.
It will come as no surprise that only 20% of current Note7 owners have not heard about the Note8. What is very interesting is that 63% of Note7 owners would consider buying a Note8 and another 24% would maybe consider it.
The Note7 recall did, however, shake its owners’ loyalty somewhat, as you would expect given all the publicity the incidents drew. Among the most appealing reasons for considering a Note8, a better camera leads at 59%, followed by features at 52% and a large screen size at 43%. Reliability, which came second among overall Samsung owners, drops to fourth place at 33% among Note7 owners. To me this shows a somewhat cautious stance toward the Note brand, but one that does not reflect on the overall brand and certainly not on the overall benefits users see in this device.
Ultimately, sales will tell whether Samsung is really past the Note7 incident, but the signs leading up to the launch and the performance of the Galaxy S8 thus far seem to indicate that it is.
We are on the cusp of watching something we have been talking about for a while go mainstream in a big way: augmented reality. With Apple’s ARKit and, from what I’m hearing, a Google competitor to ARKit for Android coming soon, these new tools will usher in a new wave of app development and innovation. I’ve made the point that AR will go mainstream through the smartphone and eventually move to other displays. What those other displays may be is the topic I want to address today.
For long-time tech industry observers, many of the primary concepts behind business-focused Internet of Things (IoT) feel kind of old. After all, people have been connecting PCs and other computing devices to industrial, manufacturing, and process equipment for decades.
But there are two key developments that give IoT a critically important new role: real-time analysis of sensor-based data, sometimes called “edge” computing, and the communication and transfer of that data up the computing value chain.
In fact, enterprise IoT (and even some consumer-focused applications) are bringing new relevance and vigor to the concept of distributed computing, where several types of workloads are spread throughout a connected chain of computing devices, from the endpoint, to the edge, to the data center, and, most typically, to the cloud. Some people have started referring to this type of effort as “fog computing.”
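The endpoint-to-edge-to-cloud chain described above can be sketched in a few lines. This is a toy model, not any particular IoT platform; the field names, thresholds, and sample values are invented for illustration:

```python
# Toy sketch of the edge/cloud split in "fog computing": the edge node
# does real-time work on raw sensor samples (filtering, aggregation,
# immediate alarms), and only compact summaries travel up the chain.
# All field names and thresholds here are illustrative assumptions.

from statistics import mean

def edge_summarize(samples, alarm_threshold=90.0):
    """Runs at the edge: reduce a window of raw readings to a summary,
    flagging anomalies locally instead of waiting on a cloud round-trip."""
    return {
        "count": len(samples),
        "mean": round(mean(samples), 2),
        "max": max(samples),
        "alarm": any(s > alarm_threshold for s in samples),
    }

def cloud_ingest(summaries):
    """Runs in the data center/cloud: computes long-term trends over
    many edge summaries (here, a count-weighted overall mean)."""
    total = sum(s["count"] for s in summaries)
    return round(sum(s["mean"] * s["count"] for s in summaries) / total, 2)

window = [71.2, 70.8, 95.3, 72.1]       # raw readings stay at the edge
summary = edge_summarize(window)         # alarm handled locally
overall = cloud_ingest([summary])        # only the summary moves upstream
```

The design point is bandwidth and latency: the raw samples never leave the edge node, which matters when the uplink is a constrained wide-area wireless connection of the kind discussed below.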
Critical to that entire process are the communications links between the various elements. Early on, and even now, many of those connections are still based on good-old wired Ethernet, but an increasing number are moving wireless. Within organizations, WiFi has grown to play a key role, but because many IoT applications are geographically dispersed, the most important link is proving to be wide-area wireless, such as cellular.
A few proprietary standards, such as Sigfox and LoRa, that leverage unlicensed radio spectrum (that is, unmanaged frequencies any commercial or non-commercial entity can use without a license) have arisen to address some specific needs and IoT applications. However, it turns out traditional cellular and LTE networks are well suited to many IoT applications for several reasons, many of which are not well known or understood.
First, in the often slower-moving world of industrial computing, there are still many live implementations of, along with relatively large usage of, 2G networks. Yes, 2G. The reason is that many IoT applications generate tiny amounts of data and aren’t particularly time-sensitive, so the older, slower, cheaper networks still work.
Many telcos, however, are in the midst of upgrading their networks to faster versions of 4G LTE and preparing for 5G. As part of that process, many are shutting down their 2G networks so they can reclaim the radio frequencies previously used for 2G for their faster 4G and 5G networks. Being able to transition from 2G to later cellular technologies, then, is a practical, real-world requirement.
Second, there’s been a great deal of focus by larger operators and technology providers, such as Ericsson and Qualcomm, on creating low-cost and, most importantly, low power wide area networks that can address the connectivity and data requirements of IoT applications, such as smart metering, connected wearables, asset tracking and industrial sensors, but within a modern network environment.
The two most well-known efforts are LTE Cat M1 (sometimes also called eMTC) and LTE Cat NB1 (sometimes also called NB-IoT or Narrowband IoT), both of which were codified by telecom industry association 3GPP (3rd Generation Partnership Project) as part of what they call their Release 13 set of specifications. Cat M1 and NB1 are collectively referred to as LTE IoT.
Essentially, LTE IoT is part of the well-known and widely deployed LTE network standard (part of the 4G spec, if you’re keeping track) and provides two different speed and power profiles for different types of IoT applications. Cat M1 demands more power but supports basic voice calls and data transfer rates up to 1 Mbps, versus no voice and 250 kbps for NB1. On the power side, despite the different requirements, both Cat M1 and NB1 devices can run on a single battery for up to 10 years, a critical capability for IoT applications that rely on sensors in remote locations.
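The decade-long battery claim becomes plausible with a quick duty-cycle model: the device sleeps almost all the time and wakes briefly to report. Every electrical figure below is an illustrative assumption; real numbers depend on the modem, the network configuration (PSM/eDRX timers), and radio conditions:

```python
# Rough battery-life model for an LTE IoT (Cat M1 / NB1) sensor.
# All electrical figures are assumptions chosen for illustration only.

def battery_life_years(battery_wh=5.0,       # e.g. a pair of lithium cells (assumed)
                       sleep_power_w=15e-6,  # deep-sleep (PSM) draw (assumed)
                       report_energy_j=0.9,  # energy per wake/transmit cycle (assumed)
                       reports_per_day=4):   # e.g. a smart meter reporting 4x daily
    """Estimate lifetime from a simple sleep + periodic-report duty cycle."""
    total_joules = battery_wh * 3600                    # Wh -> joules
    sleep_j_per_day = sleep_power_w * 86400             # always-on sleep draw
    report_j_per_day = report_energy_j * reports_per_day
    days = total_joules / (sleep_j_per_day + report_j_per_day)
    return days / 365

print(round(battery_life_years(), 1), "years")  # on the order of 10 years
```

Under these assumed numbers the device lasts roughly a decade, and the model also shows why the claim is fragile: doubling the reporting frequency nearly halves the lifetime, which is why the low-power modes in Cat M1 and NB1 matter so much.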
Even better, these two can be deployed alongside existing 4G networks with some software-based upgrades of existing cellular infrastructure. This is critically important for carriers, because it significantly reduces the cost of adding these technologies to their networks, making it much more likely they will do so. In the U.S., both AT&T and Verizon already offer nationwide LTE Cat M1 coverage, while T-Mobile recently completed NB1 tests on a live commercial network. Worldwide, the list is growing quickly with over 20 operators committed to LTE IoT.
In fact, it turns out both M1 and NB1 variants of LTE IoT can be run at the same time on existing cellular networks. In addition, if carriers choose to, they can start by deploying just one of the technologies and then either add or transition to the other. This point hasn’t been very clear to many in the industry because several major telcos have publicly spoken about deploying one technology or the other for IoT applications, implying that they chose one over the other. The truth is, the two network types are complementary and many operators can and will use both.
Of course, to take advantage of that flexibility, organizations also need devices that can connect to these various networks and, in some cases, be upgraded to move from one type of network connection to another. Though it isn’t widely known, Qualcomm recently introduced a global multimode modem specifically for IoT devices, the MDM9206, that supports not only Cat M1 and Cat NB1 but even eGPRS connections for 2G networks. Plus, it can be remotely upgraded or switched as IoT applications and network infrastructures evolve.
Like many core technologies, the world of communications between the billions of devices that are eventually expected to be part of the Internet of Things can be extremely complicated. Nevertheless, it’s important to clear up potential confusions over what kind of networks we can expect to see used across our range of connected devices. It turns out, those connections may be a bit easier than we thought.
Every quarter, one of the decks I put together as part of my Jackdaw Research Quarterly Decks Service is a comparison of financial and operating metrics for the “big six” consumer tech companies – Alphabet, Amazon, Apple, Facebook, Microsoft, and Samsung. As I’ve done for several previous quarters, I’m also doing a quick run-through here of some of the highlights from this quarter’s deck. And you’ll find the full deck embedded at the bottom of this post. You can learn more about the Jackdaw Research Quarterly Decks Service here.
Before we dive into individual charts, it’s worth noting some trends and themes that will provide useful context for this quarter’s comparisons:
Samsung had its best quarter ever, with record revenues and profits, driven largely by its semiconductor business, but also to an extent by the slightly later than usual launch of its new flagship smartphones
Microsoft closed its acquisition of LinkedIn in Q4 2016, and the employees and revenues associated with that business now show up in its reported results
Facebook has been warning of a slowdown in ad revenue in 2017 since late last year, due to the saturation of ad load within the News Feed, where most of its ads show up, but there’s so far little sign of it, though it says it should show up in H2 2017. Meanwhile, it’s begun investing more heavily in video content and is driving a shift to video advertising, which will have lower margins than its traditional advertising
Amazon appears to be entering another phase of higher investment in pursuit of growth in its e-commerce, AWS, and even advertising businesses, hiring rapidly and raising capital expenditures significantly over the past year, and that is now flowing through to lower profits
Apple is in the middle of a return to growth, after three quarters of decline, and that growth appears to be accelerating a little ahead of this fall’s widely reported launch of its first dramatically different iPhones in years
Alphabet is in some ways the most stable of these businesses, providing fairly predictable revenue growth and margins, though it’s also undergoing a big push around enterprise cloud services while reducing capital expenditures and cutting costs in its Other Bets business.
With that out of the way, let’s get on to some of the key comparisons.
Revenues and Revenue Growth
As it did last year, Samsung came out on top in terms of revenue in Q2, beating out Apple in its lowest quarter of the year, thanks to strong growth in semiconductors and, to a lesser extent, smartphones. On an annualized basis, though, Samsung remains far behind Apple, where it’s been ever since its fall from its peak in 2014. Meanwhile, Amazon continues to rapidly close the gap with the top two on an annualized basis, and had by far the strongest dollar growth year on year of any of the six companies, a position it’s held ever since Apple began to come down from the massive growth cycle it experienced in 2015 thanks to the iPhone 6. Facebook, meanwhile, remains by far the smallest of the six in revenue terms, though its year-on-year dollar growth matches Samsung’s, and its percentage growth rate outstrips all the others by a considerable margin. All of these results are shown in the charts below, which you can magnify by clicking on them.
When it comes to net profits, Samsung came out top this quarter, for the first time since its earlier peak in 2013, with Apple out in front in the interim. However, it’s worth noting that this doesn’t make Samsung “the most profitable tech company” as I’ve seen numerous headlines suggest: as with revenue, on an annual basis Apple remains far out ahead, and this quarter’s result was a quirk of a particularly strong quarter from Samsung in what’s seasonally Apple’s lowest quarter. Meanwhile, Facebook continues to close the gap with Microsoft and Alphabet in dollar terms, reaching nearly $4 billion of net profit in the quarter, and it closed the gap significantly in operating profit dollars. On net margin, Facebook remains way out ahead, with what has been a largely unbroken rise over the last couple of years to over 40% net margins. Microsoft came in second in margins this quarter off the back of a tax gain, while it and Alphabet were essentially tied in dollar terms. Amazon, meanwhile, saw a dip to its lowest net margin in two years.
Investment: Capex, R&D, and People
We’ll look lastly at three measures of investment: capital expenditures, research and development spending, and hiring. On the capex side, Samsung has dramatically increased its investment recently, with much of it going into its semiconductor business to support capacity increases to meet recent high demand. Samsung’s overall capital intensity is now nearly 20%, but in its semiconductor business it was over 40% in Q2, a level almost unheard of in such a mature business. Samsung is now ahead of the pack both in dollar spending, by quite some distance, and in capital intensity. Facebook, too, spends highly on capex as a percentage of revenue, investing heavily in data centers in particular. Apple’s capex continues to fluctuate widely from quarter to quarter based on the timing of specific investments, and the company as a result tends to manage its capex budget on an annual rather than quarterly basis.
When it comes to R&D spending, the picture looks rather different. There, it’s Alphabet that’s out ahead in dollar terms, with a rapid rise over the last few years, despite a brief slowdown in late 2016. Much of Alphabet’s Other Bets segment is in areas where there’s high R&D spend and little revenue to show yet, so this likely contributes significantly to overall R&D spending, but it also spends heavily in its core business. Samsung and Microsoft spend at very similar levels, with Apple just a little behind in dollar terms, and Facebook spends by far the least, though its spending is also rising rapidly. On a percentage basis, however, it’s again Facebook that leads the pack, spending over 20% of its revenue on R&D, with much of that spending driven by high stock based compensation costs in some of its acquired businesses like Oculus. Apple, meanwhile, has increased its share of revenue that goes to R&D from 2.5% to 5% over the past few years, but still lags considerably behind almost all the others in percentage spend due to its sheer size.
Lastly, when it comes to hiring, Amazon continues to be in a league of its own, and now employs nearly 400,000 people globally, with over 100,000 of those added in the past year (an increase roughly equivalent to Apple’s or Microsoft’s total headcount a year ago). And of course the composition of these companies’ workforces is very different: many of those Amazon employees are warehouse and fulfillment center workers, while most of those at its competitors work in engineering roles, with the exception of Apple’s large retail workforce.
That has implications for revenue per employee, where Amazon continues to be the lowest, and Apple remains just in front of Facebook. However, Amazon said on its most recent earnings call that an increasing proportion of those it’s hired lately have been sales people for its AWS and advertising businesses, with the latter a fast-growing but often overlooked part of Amazon’s overall operation. Facebook managed to hire more people than Alphabet this past quarter as it accelerates its own investment in sales and other parts of its workforce, and both of those companies are predicting a rapid rise in the second half. Apple, meanwhile, slowed hiring as its revenues fell, and although there have been some signals of a rise recently, we’ll have to wait for its 10-K for formal confirmation. Microsoft, meanwhile, added roughly 10,000 employees when it acquired LinkedIn, but has also made layoffs in its sales organization twice in the past year or so, and continues to shrink its manufacturing headcount following the decline of its phone hardware business.
There have been many stories written recently about Facebook CEO Mark Zuckerberg touring America to find out what people all over the US are thinking and concerned about these days. He called it a fact-finding trip and stated it had no political focus. But according to an article in Politico, Zuckerberg recently “hired a Democratic pollster, Joel Benenson, a former top adviser to President Barack Obama and the chief strategist for Hillary Clinton’s failed 2016 presidential campaign, as a consultant, according to a person familiar with the hire. Benenson’s company, Benenson Strategy Group, will be conducting research for the Chan Zuckerberg Initiative, the couple’s philanthropy.”
While Zuckerberg denies political ambition, the belief here in Silicon Valley is that he is thinking more seriously about some type of political run or campaign he could launch in the near future, or at least about how he can be more influential in guiding US policy on the impact technology will have on America’s future over the next 30 years.
There is some interesting history to this type of Silicon Valley political activity, which I wrote about for Fast Company last fall. Here is a passage that explains the Valley’s early interest in and influence on Washington:
“During my 35 years of covering the technology industry, I have seen firsthand how companies have tried to keep an arm’s-length relationship with the government. With some rare exceptions—the Pentagon’s cooperation and collaboration with industry brought us the internet—Silicon Valley has generally tried to avoid federal and state bureaucrats. After all, the less the government knew about what tech companies were doing, the fewer legal and legislative issues the industry would have to deal with. This dynamic no longer works.
In the mid 1990s, a group of technology heavyweights led by Cisco’s then-CEO, John Chambers, and Kleiner Perkins venture capital firm partner John Doerr, along with various other tech leaders, began to realize the Valley would need the partnership of government and politicians for their vision of the future to be realized to the fullest.
Chambers and Doerr et al also foresaw the dramatic impact that the internet and mobile technologies would have on the U.S. and the world. Already back then, Chambers was percolating his ideas of connected cities and the Internet of Things (IoT).
These executives began evangelizing these concepts within the Clinton administration and at the federal agency level. They made an effort to educate elected officials on how technology would impact every level of government, and how it would transform our cities, businesses, and system of education.
To their credit, Clinton and Vice President Al Gore understood what Chambers and Doerr were saying. Clinton and Gore opened lots of doors for the tech leaders in Washington, giving them a chance to share their vision of the future.
At the end of the Clinton era, when Al Gore battled George W. Bush for the presidency, Chambers, Doerr, and other Silicon Valley leaders wisely kept up their efforts to influence both candidates. It became clear that whoever became president would follow President Clinton’s lead and allow Silicon Valley leaders to continue pushing the tech agenda.”
At the heart of the tech world's recent interest in getting more involved in politics, whether by running for office or by finding new ways to influence our politicians, is an even greater understanding today of the impact of tech on our world's future, and of how it could dramatically change American education, jobs, businesses, and our personal lives over the next 30+ years.
“With 5G, it will begin connecting people to devices, and devices to other devices. The latter is called the Internet of Things, and it’s primed to profoundly change our lives, much the way the regular Internet has. It’s also a potentially huge source of growth — Cisco estimates IoT gear and software will become a $14 trillion market over the next decade.
5G isn’t the only innovation on the horizon. Connected and autonomous cars will hit the streets in the next decade. In combination with the IoT, they’ll “speak” to one another and to public infrastructure, helping us build smarter cities. Tech companies will roll out new ways to track our health, connecting us to our doctors to help us stay healthy. Artificial intelligence will be applied to just about everything that technology already touches. Digital security will become an even more vital issue, as businesses and individuals will be increasingly targeted by hackers. The very nature of computers will change, too, as virtual and augmented reality will be established as the new interface of computing, delivering new forms of utility and entertainment.”
Add to this AR, VR, machine learning, robotics in manufacturing, and new advances in medical science, and you see that technology is on course to disrupt just about everything around us today and well into our future.
“However, for all these innovations to thrive — and deliver potentially huge economic benefits — they will need the help of our elected officials. Lawmakers need to understand these technologies, as they will be called upon to craft new laws and regulations to bring these technologies about smartly and safely.
Therein lies a problem. If you look at our lawmakers across the country, I would venture to guess that most are not very technologically savvy. For our country to truly enjoy the benefits of these new technologies, we’ll need politicians and officials who understand how these innovations work, and how they stand to change our lives.”
Tech execs who understand the role of technology in our future, as well as its impact on things like education, the future of manufacturing, and the world of finance, look at our current president and some members of Congress and see almost no understanding or vision of what a crucial time this is in our history. When it comes to the role technology will play in shaping every aspect of our business and personal lives and our culture going forward, their lack of tech savviness will keep America from advancing and allow countries like China, Canada, and France, whose leaders embrace technology rather than dismiss it, to potentially leave us in the dust. Even worse, some congressional leaders see tech and science as a detriment to their political goals and have become obstructors instead of visionary backers.
That is why some high-powered tech leaders are thinking the unthinkable these days. Many tech execs I know hate and do not trust our government, but they are starting to come to the conclusion that our President, Senators, and members of Congress need a greater grasp of how technology will shape our world and country, and need to be tech savvy enough to keep America moving forward now. I am told that, behind the scenes, some very high-powered, forward-thinking tech execs who really understand how technology is going to drive so many major things tied to America's growth and world position are starting to contemplate running for office in states around America. Their goal would be to gain a stronger position of influence when it comes to the role government must play in guiding how technology is applied and integrated into all of our business and personal lives fairly and equally.
I have no clue whether Zuckerberg will eventually move into politics, but I am willing to bet that as more and more tech execs understand the magnitude of what has to be called the great tech revolution of this century, we will see some of them trying to find greater ways to influence our current politicians, and we will even see some begin to run for office in order to influence our government from within as much as possible.
This week’s Tech.pinions podcast features Ben Bajarin, Jan Dawson and Bob O’Donnell discussing graphics and AI-related announcements from AMD and NVIDIA made at the SIGGRAPH convention, and the earnings reports from Apple and Tesla.
Consumer uptake of virtual reality might be taking longer than some pundits expected, but the technology is finding robust traction on the commercial side of things. In fact, IDC recently published a Worldwide AR/VR Spending Guide report that predicts commercial spending on hardware, software, and services related to virtual reality will surpass consumer spending on the technology this year. What makes this particularly interesting is that this commercial growth is taking off despite the dearth of commercial-focused hardware in the market.
Strong Uptake Across Numerous Verticals
Many of the challenges facing VR in the consumer market, such as the high cost of hardware, the complexity of setup, and the lack of mainstream content, aren’t major issues when it comes to commercial deployments of the technology. And across many different verticals and use cases, the benefits are obvious, and the potential return on investment is clear. IDC’s research on the topic to date has explored VR in 12 different industries and across 26 different use cases. And remember: it is still early days.
Some of the most compelling industry use cases include:
Retail: Perhaps the most-cited use case for VR is in high-end automobile showrooms, where potential buyers can view a much wider range of car interiors and options in VR than any dealer could ever stock on the lot. In the future, you can imagine moving beyond simply kicking the tires to being able to drive the car on a virtual track. Retail use cases will expand across all types of products, and may well become one of the ways traditional brick and mortar retailers find to compete with online stores.
Education/Knowledge Transfer: From training firefighters and soldiers to educating engineers and school kids, VR is going to drive dramatic shifts in how people learn in the future. In the first scenario, people receive training in situations too dangerous or expensive to simulate in the real world. In the second, students gain access to brand new ways of interacting with and absorbing information that are less passive and more active.
Manufacturing: VR is already taking off in both process and discrete manufacturing. The use cases range from the collaborative, iterative process of creating products to training engineers and others to run massive, complex manufacturing lines. The potential for VR to disrupt age-old manufacturing processes, especially when combined with 3D printing, is massive.
Healthcare: VR will impact both the practitioners of medical care and those receiving it. On the practicing side, VR will help new doctors learn, and existing doctors see issues in new ways, pre-visualize complex procedures, and gather second opinions from remote colleagues. For patients, VR will offer the ability to better understand what’s going on in their bodies, as well as a wide range of treatment options for mental health issues.
Construction: VR is already in use in major construction projects around the world. From initial designs to construction, project pitches to project management, VR is enabling companies to make better buildings and to do it faster. And by pre-visualizing the exterior and interior of a building, the construction company can cut down on costly mistakes, while also allowing the building’s owner to make tweaks during construction, and not after. Eliminating skylights, changing finishes, and moving doors are much less costly if such changes are made before installation and not after.
Growth Despite Key Challenges
IDC has forecast robust growth in all the above areas, as well as a long list of others. And this growth is occurring even though a great deal of the early work is happening on the consumer-grade hardware that’s available in the market today. Suffice it to say, products designed for use by consumers aren’t rugged enough for long-term deployments in commercial settings. This lack of commercial-focused VR hardware is a clear market need the industry has failed to address so far. Later this year, I expect the launch of standalone VR products, based on reference designs from Intel and Qualcomm, to gain more traction in commercial settings than in consumer ones. While most consumers have limited need for a VR-only device, companies looking to deploy VR will find the simplicity quite appealing, especially once vendors start building robust, commercial-grade versions.
As the number of hardware options increases and more commercial-centric designs hit the market, the software and services associated with the technology will improve, too. We should also start to see the emergence of more VR standards, which will be key for long-term growth. And in the span of a few short years, VR will be quite well entrenched in many of these vertical markets. This represents a large opportunity for the technology companies that service these markets, and an outsized threat to those industry verticals that fail to embrace the technology in a timely manner.
As we’re nearing the end of earnings season, one of the things that’s become increasingly clear is that the big companies have mostly performed consistently with their past performance, delivering strong growth and profits. Meanwhile, smaller companies have struggled to find growth and profitability, often losing share to their bigger competitors. The recurring theme for me has been that the dominant become ever more dominant, while the smaller players continue to struggle to break in and cross over to the other side of what’s increasingly looking like a chasm.
The Big Get Bigger
On the big side, we have the giants of the consumer tech industry, as measured by revenue or by influence, all of which have now reported their results:
Alphabet reported 21% revenue growth and a 24% net margin
Amazon reported 25% revenue growth and, an exception among this group, a 0.5% net margin
Apple reported 7% revenue growth and a 21% net margin
Facebook reported 45% revenue growth and a 39% net margin
Microsoft reported 13% revenue growth (aided somewhat by LinkedIn) and a 24% net margin
Samsung reported 20% revenue growth and a 14% net margin, with its strongest quarter for both revenues and profits ever.
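For readers less familiar with the metrics in the list above, the arithmetic is simple: net margin is net income as a share of revenue, and year-on-year growth compares a quarter's revenue to the same quarter a year earlier. A minimal sketch, using purely illustrative figures (not any company's actual reported numbers):

```python
# Illustrative figures only -- not actual reported results for any company.

def yoy_growth(revenue_now: float, revenue_year_ago: float) -> float:
    """Year-on-year revenue growth, as a percentage."""
    return (revenue_now - revenue_year_ago) / revenue_year_ago * 100

def net_margin(net_income: float, revenue: float) -> float:
    """Net income as a percentage of revenue."""
    return net_income / revenue * 100

# A hypothetical company with $26.0B in quarterly revenue, up from
# $21.5B a year earlier, earning $6.25B in net income:
growth = yoy_growth(26.0, 21.5)
margin = net_margin(6.25, 26.0)
print(f"growth: {growth:.0f}%, margin: {margin:.0f}%")  # growth: 21%, margin: 24%
```

Framed this way, the contrast in the list is stark: the same formulas applied to the smaller players discussed below produce negative growth and negative margins.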
With one exception, these are highly profitable companies, and with one exception again, they grew by double digits year on year. Beyond the mere financials, though, these companies individually or in pairs or trios dominate key markets:
Amazon is increasingly dominant in e-commerce, not taking majority share of spend, but certainly eating up the vast majority of growth in the market
Alphabet’s Google and Apple carve up the smartphone operating system market between them
Facebook dominates social networking, with no viable competitors on a global basis
Amazon, Google, and Microsoft dominate cloud computing
Samsung and Apple remain the top two players in the smartphone market, and the only ones to make more than minimal profits in the market
Microsoft continues to dominate the PC OS market, with Apple and Google rounding out the rest
Facebook and Google dominate online advertising, and especially mobile advertising, and as with Amazon soak up much of the annual growth in the market in the US
Amazon and Google dominate the home speaker market between them, with Amazon having the lion’s share
Apple dominates the premium smartphone and tablet markets.
I could go on, but you get the picture: these big, successful companies are only becoming bigger and more successful, and more dominant in the various markets where they compete.
The Small Continue to Struggle
Not all of the smaller consumer tech companies have reported yet, but we have enough of a picture from those that have, and from past earnings from those that haven’t, to know what we’ll end up with:
Twitter’s revenue declined for the second straight quarter, down 5% year on year, and continues to lose buckets of money, with a negative net margin of 15% in Q2
Fitbit saw continued rapid declines year on year in its shipments and revenue, and also saw significant losses
GoPro will be reporting on Thursday, and my guess based on last quarter’s results is that losses and revenue declines are on the cards there too
Snap reports next week, and although it’s seen decent growth driven by rising ARPU, its user growth continues to struggle and it’s also losing lots of money
Smartphone vendors from LG to HTC to Lenovo either have reported or will likely report losses and shrinkage, with few exceptions.
Now, some of this is down to company lifecycles, with both Twitter and Snap yet to generate a profit in any quarter, while Fitbit and GoPro have been profitable, high-growth companies in the past but have run into trouble. The lower-tier smartphone vendors, meanwhile, have always struggled in markets that offer little differentiation and intense competition and that heavily reward scale and premium offerings.
Barriers to Independent Success Remain High
In an earlier piece, I wrote about the danger of being a one-trick pony in the tech industry, with both Fitbit and GoPro among the examples I cited. And that remains a key issue for these companies, many of which are single-product companies and have failed to build broader platforms and ecosystems that can attract consumers and differentiate against powerful competitors.
But the barriers to success go well beyond that. Many of the largest players in the industry enjoy significant network effects and scale which enable them to quickly ramp up new products and services by selling them to massive installed bases of devices or regular users. I wrote about the power of Amazon’s Prime in this regard last week, but Facebook is another great example. If Facebook feels threatened by a new app or feature offered by a rival, all it has to do is copy it and make it available to its own massive user base of 2 billion monthly active users or the smaller but still substantial WhatsApp, Messenger, and Instagram user bases. The rise of Instagram Stories to 250 million daily active users over the past year, eclipsing Snapchat’s 166 million daily active users as of the end of Q1 2017, is perhaps the perfect example of this.
On the rare occasions when companies and products do manage to break through these barriers and create real differentiated value, they’re often simply acquired by the bigger players. WhatsApp, Instagram, LinkedIn, DeepMind, and others are among the list of companies which had created interesting businesses or technologies outside of the big tech companies and yet have now ended up being absorbed by them.
For all these reasons, it’s almost impossible to cite an example of a consumer technology company that’s emerged in the last few years and achieved real financial success independently of and despite the dominance of the big tech companies. Of those that have tried, the vast majority have run up against the power of ecosystems, been cloned and eclipsed by the big companies, created markets which ended up dominated by larger players, or been acquired.
I’m far from convinced, as some are, that this means regulators should start looking at these companies on antitrust grounds, mostly because I don’t think they’re doing anything illegal. But we are going to see calls for regulatory intervention, especially in Europe and other markets outside the US, and we’re going to see an increasing backlash against these dominant players from consumer groups, would-be competitors, and politicians. Dealing effectively with these complaints and threats is going to be an important skill for these companies over the coming years, even as they begin to feel more and more invincible.