One of the first articles/analyses I ever published on Tech.pinions was aptly titled Why Apple Has a Strong Competitive Advantage. I’m linking to this article, but I encourage you not to read it, as my writing skills have greatly improved and going back to it is a little painful. Nevertheless, that article remains the most read article on Tech.pinions to this day and still generates significant monthly views from people searching the term “Apple’s competitive advantage” via search engines. Clearly, this is still a topic many are interested in.
I’m not going to go back through my points, but I will list the core pillars because I still believe they apply to Apple’s competitive advantage today. The core pillars of Apple’s advantage I outlined in my essay were:
- Apple’s Hardware + Software
- iTunes & Digital Asset Management (what turned into the services businesses)
- Apple’s Retail Strategy
This piece was written in 2011, and Apple as a company has matured greatly. What I outlined for iTunes/digital was the early seed planted for Apple’s services business, which is absolutely a key part of their differentiation and advantage today. Other things I would include now are Apple being a functional organization (one P&L instead of competing business units) and their hyper-focus on customer experience as a culture and philosophy. One could argue these are ingredients of their advantage more than pillars, the way their integration of hardware and software or their retail strategy are, and I could agree with that. But if I were writing that article today, I would add two new areas that I feel are undoubtedly pillars of Apple’s competitive advantage.
My belief that Apple’s investment in custom silicon is a pillar of differentiation won’t shock many of you since I cover this subject extensively. However, in today’s computing economy, I would argue that Apple’s efforts in silicon are the underlying foundation on which ALL of Apple’s differentiation is built. Meaning, every other pillar of differentiation and competitive advantage is made possible because of Apple Silicon.
I’m not saying Apple would not be as successful had they never started making their own silicon, but I am quite confident the lead they have over multiple competitors would not be nearly as significant if Apple shipped the same silicon components their competition does. In essence, their differentiation and advantage would likely still exist; it just would not be as strong.
Apple’s investment in silicon brings them many advantages, but first and foremost is the ability to tune components to their hardware, software, and services vision. Apple can plan its roadmap in lockstep across hardware, software, and silicon engineering, a luxury none of its competitors have.
It could be easy to say their efforts in silicon are just part of their integration strategy. That is true; however, I contend silicon is the core of their integration strategy. I’ve long said the famous quote from Alan Kay that “people who are really serious about software should make their own hardware” should be revised to “people who are really serious about software, and hardware, should make their own silicon.” I think if Steve Jobs were around today, he would be OK with that revision as an orienting principle for how Apple thinks about integration.
In looking at structural competitive advantages, we look for things that the company we are analyzing is uniquely equipped to do that competitors are not, or at least not in the same way. This is why I would add Apple’s efforts in privacy as a pillar of their competitive advantage.
Apple’s business model is a key reason they don’t have to harvest user data for economic gain. I know this has become a topic of debate lately since Apple has been clear they do collect some data to improve their products and services for their users. However, there is a difference between observing some of your habits for other parties’ benefit and observing some of your habits to make your experience better, and Apple falls more into the latter than the former. I’m still on the fence about a few areas of Apple’s advertising push, but that’s for a different analysis.
Ultimately, the point I want to make here is that people trust Apple, and it is becoming clear they trust Apple with their data, even their sensitive data. I’m not saying people don’t trust other companies, but I am saying that if you asked a random person on the street which technology companies they trust with their most sensitive and private information, it would be an extremely short list.
People have proven to trust Apple with their credit cards, location data, family location data and information, medical records, health information, and more. Of course, Apple didn’t get there overnight, and its efforts to protect consumer privacy have been around for a while. But making privacy a point of marketing and doubling down on it will afford them some advantages that other companies will not have.
The main one that comes to my mind is around Apple Watch. Apple is by far the leader in wearable consumer devices, and the advancements made to Apple Watch every year only go deeper into a consumer’s health and well-being. However, they could not do this if they didn’t have a base of trust, and in some cases, they are still working to earn the trust of their customer base.
Leveraging the Advantage
With those two additions to the pillars of Apple’s competitive advantage, I want to look forward to future products and industries Apple can move into. When we think about going deeper into health/healthcare, more personal and intimate wearables, computers like glasses and beyond, and even automotive, where our lives are at stake, Apple Silicon and Apple’s privacy stance become fundamental advantages that will allow them to go into markets competitors can’t.
It is easy to see the whole picture now, but seeing how far back Apple has clearly been planning and deepening the pillars of silicon and privacy shows us just how far down the road Apple thinks strategically.
One of the more interesting conversations happening around the tech industry currently is what kind of cycle we are in. In the last few years, I’ve noticed an interesting tension emerge because, as a whole, the tech industry has just come off a period of significant, rapid invention and innovation. From the mid-2000s until the mid-2010s, we saw an incredible innovation cycle around computing in which over half the planet became connected to the internet via a primary computing device. Humanity has no real historical precedent for the rapid innovation and adoption of computing devices into the hands of more than 3 billion people in a span of fewer than 10 years. The tension that is emerging is more one of expectations than anything else.
When your most present memory is a decade of innovation around PCs, smartphones, tablets, and wearable technology, once that innovation slows down, it can create the perception that tech is no longer innovative, or that it is mature, or, as I’ve often heard it said, that the tech industry is “boring.” I made the point several years ago that the pace of innovation of the last decade simply was not sustainable, and we can’t expect it to be the norm.
What is happening is actually what many predicted would happen when the first tech boom hit in the late ’90s. I wrote about this pattern, what many call the boom, bust, buildout theory. Nearly every mega-industry since the industrial age has seen this pattern: a boom, meaning a flood of capital into a market, followed by a bust due to the overflooding of that capital, followed by a long, sustained period of building out the industry via innovation and adoption.
Having studied consumer adoption of technology for over 20 years now, the way I think about this cycle is that the boom-and-bust part of the market helps enable product creation: the flood of capital is followed by falling prices after the bust, which lets new entrants innovate and provide solutions at costs the mainstream can afford. Sometimes this happens quickly, and sometimes it takes a while, but the buildout phase is the mass adoption of the solution by the general public.
While it may seem boring, we are still in the buildout phase as a tech industry. What makes this cycle different from others is the role the Internet plays. The Internet is the backbone that makes computing relevant; if our hardware were not connected to the Internet, it would be largely useless. And unlike boats, railways, engines, and other technologies where this pattern emerged, the Internet is dynamic, not static. The dynamic nature of the Internet is what will enable the continued innovation of our digital devices, and it causes the buildout cycle to be longer and larger than in any previous mega-industry.
The Next Big Thing
In the framing I just provided, my thinking has clarified this question around what is the next big thing. Of course, we can say something easy like augmented reality, virtual reality, robotics, or automation, but the reality is the next big thing is the evolution of the Internet. The truth is, all this interesting hardware we like to envision is simply a new way to interact with the Internet. We interact with the internet today on 2D flat screens, but in the future, we will interact with it via 3D displays, and with robotics and AI, the internet will start to interact with us.
The rapid period of innovation of the last decade was the creation of hardware to interact with the Internet version 1.0, or 2.0 according to some people. That manifests itself in web browsing and apps. Internet 3.0 will consist of more AI, augmentation, and automation, and the devices we experience the Internet with will look and feel much different than today. I find thinking about the future more helpful if we start with the evolution of the Internet around cloud, AI/ML, automation, etc., and then think about devices, rather than thinking about devices first. Simply because the Internet’s evolution (what is capable on the backend) will dictate what is possible within the hardware we create to experience it.
But, all of this means we are still a long way off from mass adoption of topics that get people excited today. Augmented reality is a 10+ year journey to mass adoption. Automation, specifically self-driving cars, is a 10+ year journey to mass adoption. We haven’t even scratched the surface of AI and robotics yet, which is more like a 20+ year journey.
This buildout phase is important, and it doesn’t mean there isn’t still innovation ahead. But, it won’t be as exciting as the last 10 years in terms of hardware invention.
This article is exclusively for subscribers to the Think.Tank.
In the last few months, I have had some very interesting conversations with executives knee-deep in dealing with supply chain shortages for their companies’ procurement. The narrative about the chip shortage has concentrated on the leading-edge manufacturing nodes, meaning 7nm, 5nm, etc. But from the conversations I have had, this is not the biggest issue impacting the industry’s shortage, nor is it where their worry lies about semiconductor manufacturers’ investments. It turns out the semiconductor industry’s biggest issue is at the trailing edge.
The Trailing Edge and Legacy Nodes
The leading edge gets all the attention because it is the most exciting. The leading edge powers the supercomputers in the cloud, our desks and laps, and our pockets. But computing devices are not made up solely of leading-edge microprocessors. The vast majority of other components are built on legacy nodes, and quite often a device includes many chips from the trailing edge.
What became clear in these conversations is that most of the fabs in the news, like TSMC, Samsung, Intel, and even GlobalFoundries to a degree, are not relevant to trailing-edge semiconductor manufacturing. Most of the companies making these chips, largely on 90nm processes and larger, are located in China. Generally, these chips are a commodity, which is why most big fabs do not invest much in the trailing edge. Yet most modern digital devices run at least a handful of chips built on the trailing edge. So while not nearly as sexy as the leading edge, chips made on legacy processes are still just as important to the manufacture of computing devices.
No End in Sight
Sadly, this likely means there is no end in sight for the capacity shortage. The fear of a semiconductor drought alone has caused most of the big technology firms to stockpile chips and lock in purchase orders with suppliers. Unfortunately, this only exacerbates the demand and delays for the foreseeable future.
In discussions on the matter, even an easing of demand is not necessarily light at the end of the tunnel. This situation has highlighted the steep challenge of predicting demand and the fine line companies’ procurement divisions have to walk to manage such a diverse supply chain of components. Moreover, there are worries that companies, now hyper-aware of the delicate supply chain balance, may move forward with a new procurement strategy and philosophy that includes more advance purchasing and volume guarantees.
If, in the end, supply chain management and procurement logistics go through a strategy and philosophy change, it could mean a prolonged challenge to get core components in a timely fashion. The discussions of strategy happening in the supply chain are uncharted waters, which is fascinating in its own right.
A Point on Semiconductor Supply Chain Nationalization
I have written in the past about the role the semiconductor supply chain plays in the national security discussion for a nation-state. This was the basis of my thesis that the US needs to invest more in semiconductor manufacturing, both for the competitive needs of its companies and for its own national security. And while that is true and needed, fully nationalizing the supply chain is likely impossible.
Even if there were a leading-node manufacturer owned by a US company and operated on US soil, said company would still be subject to a global supply chain of parts required to make silicon. For example, wafers, lithography machines, etc., are generally purchased from companies outside the US.
I make this point simply to say that for all the points we make about the need for local manufacturing of silicon, the reality is we can’t escape the global supply chain of semiconductor manufacturing, which makes it nearly impossible for a nation like the US to have every aspect of that supply chain operating within its borders.
Our takeaway, then, is first and foremost that this semiconductor supply chain shortage has no end in sight. Second, manufacturers of mass-scale technology are securing their orders in bulk and creating a wait-in-line scenario for everyone else, exacerbating the shortage. And lastly, even an emphasis on domestic manufacturing of silicon can’t alone solve this problem for the companies that operate on US soil.
The only thing that still surprises me about Apple is that people keep being surprised by their ability to deliver great quarterly revenues. But, of course, I know the point I’m making is more of an industry and investor one since most of Apple’s customers pay no attention to their earnings.
I have participated in countless investor debates on the sustainability of Apple’s business. While most investors have come around to the resiliency of Apple’s business and adjusted their models accordingly, there is still a general sustainable growth concern.
What I find most fascinating, and perhaps the biggest reason I’m confident in Apple’s sustained growth, is how the growth contributions can come from unexpected areas. One quarter it may be iPhone, one quarter Mac and iPad, one quarter services, one quarter wearables, etc. While multiple of these lines of business can contribute to varying degrees, what has to be appreciated is all the revenue levers Apple has, which continually contribute to top-line growth.
While I appreciate the debate around how much more the iPhone can grow, being stuck on that point misses the fact that the rest of the Apple product and service ecosystem is underpenetrated compared to the iPhone. Meaning that not every iPhone owner has a Mac, iPad, Apple Watch, AirPods, or HomePod. Not every iPhone owner subscribes to Apple’s digital services. In that light, the growth story looks a lot more interesting and sustainable.
While the iPhone will continue to see a cadence of upgrades, generally over a four-year time frame, and be stable in annual shipments, Apple keeps sharing that its active iPhone installed base keeps growing. This means they are continually adding new customers, even if relatively slowly compared to other products.
To give some numbers for context, the iPhone global installed base is ~1 billion. Mac is ~110m. iPad is ~250m. Apple Watch is ~100m, and AirPods are likely in that range as well. So compared to the iPhone installed base, every other hardware category from Apple is owned by less than 30% of their global customer base.
I’ve seen several global surveys on iPhone customers, which indicate that no single service has more than 30% penetration into the global iPhone customer base. iCloud appears to be the most subscribed to service at around 30% of iPhone customers subscribing to the storage service.
Now to the important takeaway in the Apple growth narrative. ~70% of Apple customers don’t own an additional piece of Apple hardware or subscribe to Apple’s digital services. While I do not expect 100% of all iPhone owners to own multiple pieces of Apple hardware or subscribe to all of Apple’s services, there is still a great deal of growth headroom for every offering from Apple beyond iPhone.
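To make the headroom arithmetic concrete, here is a quick sketch using the rough installed-base figures cited above. These are the article’s approximations, not official Apple numbers, and the AirPods figure is an assumption based on the “in that range” remark.

```python
# Penetration of each product line relative to the iPhone installed base.
# All figures are rough estimates from the discussion above.
iphone_base = 1_000_000_000

other_bases = {
    "Mac": 110_000_000,
    "iPad": 250_000_000,
    "Apple Watch": 100_000_000,
    "AirPods": 100_000_000,  # assumed: roughly in the Watch's range
}

for product, base in other_bases.items():
    share = base / iphone_base
    print(f"{product}: ~{share:.0%} of the iPhone installed base")

# Every category sits below the ~30% penetration ceiling,
# which is the headroom the growth argument rests on.
assert all(base / iphone_base < 0.30 for base in other_bases.values())
```

Even the most penetrated category (iPad, at ~25%) leaves the large majority of iPhone owners as potential customers for every other line of business.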
Once Apple acquires customers, they generally do not leave. Apple has loyalty, engagement, and brand equity on its side, and all those things factor into the continued resilience and strength of its business. Apple has a history of putting out great products, and as long as they continue to put out great products, there is no reason to be surprised by their performance in the marketplace.
This week’s Techpinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell analyzing the news from Apple’s Spring product launch event, discussing the latest quarterly earnings results from Netflix and Intel, and chatting about a new Microsoft study on work habit challenges from the pandemic. Note that this will be the last Techpinions podcast that I host as I am transitioning to my own Everything Technology podcast.
An article posted by Nilay Patel on The Verge over the weekend seemed to get a fair bit of attention. I always found the industry buzz generated over Netbooks fascinating, and why so many in the media seemed so excited about and fawned over these devices. Perhaps it was simply because they represented a shift away from the normal, boring clamshell notebook designs that had gone relatively unchanged for years. But the reality was no one making these devices or participating in the ecosystem wanted this category to happen.
I mention this brief anecdote on Netbooks because there has been an ongoing debate about what Netbooks meant leading up to the iPad launch. If you recall, Steve Jobs mentioned Netbooks during the iPad launch and famously remarked that their problem was “they aren’t good at anything.” Jobs was spot on, but the Netbook brought broad enlightenment to the tech industry: traditional PCs were too complicated, and most consumers did not do much with their laptops and desktops. I was always personally hesitant to side with the argument that consumers didn’t care to do more interesting and computationally complex tasks. This is why I always found the iPad so interesting as a computing device.
iPad vs. Mac
As I watched yesterday’s event from Apple and looked at post-event commentary, a popular angle was noting that the line between the iPad and Mac was becoming blurrier now that the iPad uses an M1. Adding an M1 to the iPad was a big question mark for me and, to be honest, one I was not convinced Apple would do. My gut wanted to keep some cleaner lines between Mac and iPad, and the processor was a good way to do that. It is not as if the iPad running an A-series processor was underpowered. But upon further reflection, the addition of the M1 in the iPad Pro makes me even more bullish on the iPad.
When I think about drawing the lines between iPad and Mac, I come back to something I’ve said about the iPad for quite some time. The strength of the iPad is its versatility. A popular framing inside Apple for iPad is that it is a magic piece of glass. That magic allows it to be just about anything. If we refine the way to think about iPad, it is the most versatile portable computer Apple makes. This is the message I think Apple needs to lean into so consumers who value both versatility and portability can clearly gravitate toward iPad if they happen to be on the fence between a Mac and an iPad.
Another interesting element of the iPad running an M1 is how it can leverage the growing base of software optimized and created for M1 Macs. I agree with some of the commentary I saw yesterday arguing that the M1 is overkill for the iPad unless the software ecosystem builds up to take advantage of all the performance the M1 offers. But I think that gap is filled by the many developers optimizing and creating new software for M1 Macs, which should translate nicely to the iPad. If anything, this helps create a much cleaner line of separation between the iPad Pro and the iPad Air and iPad. At the top end, Apple’s Pro computing lineup is Macs and iPad Pro.
I led off this note by talking about the Netbook and how some people draw comparisons between the Netbook and the iPad and like to argue the iPad can’t replace your PC. That debate has been dead for a long time, but some still like to remain stubborn. Adding the M1 to the iPad should end the debate permanently; the iPad is absolutely a viable option for someone looking to replace a laptop.
The Merging of iOS and macOS?
Another question that keeps coming up is if/when Apple will merge iOS and macOS, something that each year feels like it is taking baby steps in that direction. It does seem like this will happen someday, as it has a lot of benefits for developers. What may end up happening is Apple develops an entirely new framework around app development that can more adequately adapt to the range of devices they will make. iOS was built for a mobile world, and macOS was built for a stationary world of computing. Apple has not yet built a unified operating system from the ground up for all the categories they are in, including wearables and, at some point, AR/VR. What Apple seems to be building at the moment with their operating systems feels more like bridges than brand new continents. Perhaps an entirely new, more encompassing operating system is coming.
A point I will leave as food for thought: what if the future of Apple’s computing devices is the M1, and eventually they bring that architecture to the iPhone and beyond? The M1 has a fundamentally different architecture than their A-series chips, one that is a bit more capable of scaling the clock frequency up or down depending on the device. If a grander unification of platforms is on the horizon, I think it is logical not to have separate M and A chips but one highly flexible and scalable architecture capable of incredible computing power. Right now, that seems to be the design of the M1, so M1 iPhones may not be too far off.
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news from Nvidia’s GTC conference, chatting about the release of Microsoft’s Surface Laptop 4 and other accessories, and previewing the upcoming product launch events from Samsung and Apple.
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the recent announcement of Verizon’s deal with AWS to combine private 5G with AWS Outpost private cloud for a new type of edge computing, chatting on Intel’s new 3rd generation Xeon Scalable server CPUs and their impact on 5G infrastructure, discussing changes in the smartphone market with the exit of LG and the debut of Samsung’s low-cost A series, and analyzing T-Mobile’s 5G for All announcements on free 5G phone upgrades and the debut of their 5G Home Internet service.
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the announcement of Arm’s latest generation v9 chip architectures unveiled at their Vision Day event and analyzing the news from Cisco’s CiscoLive event, including their latest efforts on Webex and hybrid work.
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing Intel’s new IDM 2.0 manufacturing strategy, discussing Microsoft’s recent Remote Trend Index study on hybrid work expectations and chatting about a new Samsung chip co-designed with Marvell that’s designed to improve the power efficiency of 5G network infrastructure equipment.
Yesterday, Intel announced a series of efforts that may one day be looked back on as foundational for their turnaround. Despite Intel being in a strong market position in many of the areas they compete in, the leadership zeal Intel once brought to the technology industry had been waning, and when it came to looking for points of innovation in computing, Intel was not the first company to come to mind.
In many ways, Intel is in a position very similar to Microsoft before Satya Nadella took over. Microsoft, like Intel, remained dominant in many markets. But the issue was a staleness surrounding the company that was leading to fading relevance as an industry and category leader and technology innovator. Nearly overnight, Satya Nadella brought renewed vigor to Microsoft, and Pat Gelsinger appears to be having the same impact on Intel. With the right leader, no company can ever be ruled out. Microsoft proved this, and Intel may very well prove this with Pat Gelsinger at the helm.
Intel’s Renewed Foundry Efforts
One of the main announcements is Intel’s renewed effort at being a competitive foundry, making semiconductors for companies other than Intel. Some may remember that Intel has tried this before, but those efforts failed for a variety of reasons, the main one being that Intel’s priority was always its own products and not others’. So what has changed this time around?
The first is a very different market than when Intel first attempted to be a foundry. There has never been more demand for semiconductors than right now, and that demand will remain and grow stronger for years to come. Amid significant semiconductor demand, leading to chip shortages in every segment, Intel remains one of the only semiconductor manufacturers with capacity to offer immediately.
Another significant difference is that Pat Gelsinger is a different leader, with different ambitions and goals than those who led Intel’s previous foundry efforts. Intel Foundry Services is a stand-alone business unit, which reports directly to Pat Gelsinger and will have its own P&L, revenue goals, and growth targets. This is a much different model than what Intel set up in their prior foundry efforts.
Lastly, Intel Foundry Services will have one of the more robust I.P. portfolios for foundry customers. This statement from the press release is succinct: “IFS will be differentiated from other foundry offerings with a combination of leading-edge process technology and packaging, committed capacity in the U.S. and Europe, and a world-class I.P. portfolio for customers, including x86 cores as well as ARM and RISC-V ecosystem I.P.s.” Two things about this are unique to Intel foundry customers.
The first is x86 cores, which assumes some x86 I.P. At the moment, AMD is the only other company offering a cooperative effort to build custom x86 products for customers, in their semi-custom business. It is unclear how deep the customization efforts for Intel x86 I.P. would extend, but it will be extremely interesting to see how foundry customers take advantage of it.
The other element of interest to me is Intel’s packaging technology, particularly their 3D packaging technology, which is quite innovative and unique. Foundry customers can take advantage of this technology, which could turn out to be a significant differentiator for Intel Foundry Services, as well as for Intel’s own products going forward.
While Intel Foundry Services is no slam dunk, I remain cautiously optimistic about their efforts. As I’ve outlined before, Intel being competitive as a foundry is extremely important for U.S. technology companies, so I personally believe it is in the best interests of many that Intel succeeds in this area.
Catching up in Process Technology
A fundamental part of Intel’s turnaround effort is to close the process node gap between their foundries and TSMC and Samsung. Intel will roll quickly past 10nm, something they should have done years ago, and looks on track to deliver 7nm in a timely fashion and then move quickly to 5nm on a regular cadence going forward. Part of the reason I have more confidence Intel can make these process node advancements is their embrace of EUV and their less aggressive density targets. Both of these recipes are what TSMC has used to maintain a steady cadence, and there is no reason to believe Intel can’t do the same.
Assuming Intel can close the gap, even if not catch up, there is reason to believe they will remain extremely competitive even if they are not making products on the same nodes as TSMC and Samsung. A lot of that has to do with Intel’s packaging technology which Gelsinger argues is the best packaging technology in the world, and there is some truth to this claim.
Intel is not actually as far behind its competition as it may seem when it comes to performance. They are a bit farther behind in performance-per-watt, but not in sheer performance. This is a testament to their architecture and, as we will see with future competitive products, a testament to their packaging and transistor design.
Getting to the point of at least some parity on process technology is essential for both Intel’s own products manufactured by Intel and customers of Intel Foundry Services. The next few years will be critical at an execution level for Intel to deliver on this front and for us to gauge their success at meeting regular cadence schedules.
I’ve always been curious to see what it would look like for other companies to collaborate with Intel and have their architecture designs run on Intel’s advanced packaging technologies. Hopefully, we may soon finally see if Intel’s process and packaging technology is truly that differentiated. I’d also love to see what Arm designs look like on Intel packaging technology (hint: Apple).
Outsourcing Intel Chips
As interested as I’ve been to see what other semiconductor companies can do with Intel’s packaging technology, I’m also quite interested to see what Intel architectures look like running on other companies’ process technology. We have been watching to see if Intel would outsource its CPU designs to a different process, and it looks like we may see that reality.
As a part of Intel’s announcements, it became clear that they would now start outsourcing some CPU designs to TSMC and possibly Samsung in the future. While this may be a trial run at first, I’m extremely interested to see how Intel CPU designs perform on a leading-edge TSMC process. Again, this will bring quite a bit of clarity to the quality of Intel’s architecture and potentially their process as well.
The other area of intrigue this creates is the competitive dynamic between Intel Foundry Services and other foundries. If Intel (who is Intel Foundry Services’ largest customer) sees significant benefits from their products made at TSMC, then it creates a competitive dynamic that did not exist before. Intel should be building its products on the best technologies, and if it becomes clear that is not Intel Foundry Services, then IFS will be truly competing for Intel’s business. A fascinating storyline to watch develop.
What is also new to this equation, and will be interesting to watch, is the dynamic of Intel now competing with any outside foundry that makes its products. Intel may be partnering with TSMC, but they will also be competing with them directly through Intel Foundry Services. This, in my opinion, is the most awkward part of the overall new structure and announcement. We will watch this new dynamic closely.
Overall: I am significantly more bullish on Intel now than I was a year ago. Pat Gelsinger is the right guy at the right time, and as I mentioned with the observation on Microsoft, sometimes all it takes is the right leader. These announcements are a huge step in the right direction, but as Gelsinger continued to emphasize during his presentation, it will all boil down to execution. That being said, I’m more optimistic that Intel can execute going forward than at any time in the last decade.
This article is exclusively for subscribers to the Think.Tank.
This week’s Techpinions podcast features Ben Bajarin, Bob O’Donnell and special guest Steve Baker of NPD looking at sales trends for tech-related products in categories ranging from TVs and PCs to WiFi routers and monitors, throughout 2020 and into 2021, and what that says about the future of the tech industry.
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the 10th birthday of Google’s ChromeOS and Chromebooks, chatting about an interview with the leader of Qualcomm’s Licensing Division, research on how work from home impacts women, a deal between automotive supplier Bosch and chipmaker GlobalFoundries, and analyzing the investor analyst days from Verizon, T-Mobile and AT&T and what they say about the future of 5G in the US.
I want to add a perspective to the idea of augmented reality (AR) that I don’t hear discussed much. When most people who are familiar with augmented reality talk about AR, the concept is rooted in overlaying the digital world onto the physical. Common use cases in the AR vision include getting directions and having more details about your route revealed to you; shopping, with lists, product information, and more surfaced to help you shop more efficiently; and communications, where your emails, texts, or contact information show up right before your eyes. I could go on, but there is a fundamental point lost in these use cases that I feel will shape the category.
What is Augmentation?
First, we need to understand what the foundation will be for augmented reality. We can start by recognizing the definition of augmenting. Augment, in its verb form, means “to make (something) greater by adding to it.” If we start with this simple understanding of what augment means, we can then have a more productive conversation about the future of augmented reality. Just looking at the words, augmented reality would, by definition, mean making reality greater by adding to it. The challenge I’ve run across with several vision pieces for augmented reality is how the use cases discussed make reality worse, not better.
What I think this conversation about the future of computing and augmented computing needs to center on at this moment is what exactly we are trying to augment. More specifically, are we trying to augment (make better by adding to it) the computer or the human? I would argue that if we flip our thinking from augmenting the computer, which is what most people imagine, to augmenting the human, we will get closer to a reality where computers make our lives better and not worse. From that viewpoint, I’d argue that people will have far less tolerance for the inefficiencies most of our smartphone and PC experiences encompass.
Augmenting the Human
In case it isn’t clear, I have personally landed on the side of augmenting the human, not the computer. I mean that whatever form a computing device focused on augmentation takes, its sole purpose should be to make the human’s capabilities better. For example, I believe health and fitness wearables are a form of augmented computing. If you use one of these devices, such as the Apple Watch, you realize it increases your health potential in various ways. For me, Apple Watch increases my fitness and overall health by monitoring many vitals (with more to come in future versions) and bringing a computational element to health that is not achievable without it. By definition, Apple Watch is augmenting my health.
Another area of great interest to me is devices we have in our ears. From my perspective, a simple hearing aid gives us a basis for augmented hearing. Hearing aids enhance hearing for those who need them, but there are applications of this for everyone. For the last six months, I’ve been trying the IQBuds Max from Nuheara, and they have been truly enlightening for me under the premise of augmented hearing.
These earbuds are not just noise-canceling music devices. Their true value is in their sound-enhancing features, particularly around voices. They have an active microphone that listens to outside sounds and can eliminate white noise and isolate sound so you hear everything better, even if you have good hearing in general. The software includes different modes that can isolate sounds coming from certain directions, all in real time.
My experience with the Nuheara IQBuds Max has shown me a vision for what an ear-worn computer could do in augmenting my hearing. They are not just helping me hear noises or conversations better, but also digitally managing sound levels, boosting sounds that are too soft and softening sounds that are too loud, all in real time. These earbuds, not just active noise canceling but active noise listening, have shaped my perspective on augmented computing more than any other product I’ve tried up to this point. They are the definition of augmenting (making better) my hearing by using a microcomputer with AI that sits in my ears.
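To make the level-management idea concrete, here is a toy sketch of dynamic range compression, the basic signal-processing technique behind boosting soft sounds while taming loud ones. This is an illustration of the general concept only, not Nuheara’s actual algorithm; all names and parameter values are my own assumptions.

```python
# Toy illustration (NOT Nuheara's actual algorithm): dynamic range
# compression squeezes loud peaks above a threshold, then applies
# makeup gain so soft sounds end up louder while peaks stay controlled.

def compress(samples, threshold=0.5, ratio=4.0, makeup_gain=1.5):
    """Apply simple dynamic range compression to audio samples in [-1, 1]."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # Above the threshold, extra level is reduced by the ratio.
            mag = threshold + (mag - threshold) / ratio
        # Makeup gain lifts the whole (now flatter) signal, capped at 1.0.
        mag = min(mag * makeup_gain, 1.0)
        out.append(mag if s >= 0 else -mag)
    return out

quiet_and_loud = [0.1, -0.2, 0.9, -1.0]
result = compress(quiet_and_loud)
# The quiet samples come out louder; the full-scale peak comes out softer.
print(result)
```

Real earbuds do this per frequency band and with smoothing over time, but the core trade, trading peak level for audibility of quiet detail, is the same.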
Lastly, let’s talk about augmented vision. If we accept my premise that the primary way to think about augmented computing is computers that enhance the human’s capabilities, then vision becomes quite clear, literally. Yes, a computer I wear over my eyes will bring elements of our experiences with computers today. Still, as I said, our tolerance for inefficiencies will go down dramatically when it comes to our eyes. A saying I’ve come up with and am fond of: while the wrist and ears are prime real estate for computing devices, the eyes are sacred ground.
In 2019, I did a project for a company working on an augmented reality product. We brought consumers in and tested a wide range of current AR solutions. From this research, it became abundantly clear to me that the eyes will be an extremely difficult place to put computers. Rather, the eyes will be an extremely difficult place to bring the current smartphone- or PC-centric thinking about computing. The core challenge is that nearly every use case that exists today distracts from our vision instead of enhancing it. Showing me a text message gets in the way of my vision. Even how directions are shown will need to be re-imagined so as not to obstruct or get in the way of what I see in the world. This is why I’m convinced the eyes will be the most difficult place to bring a computer and why we are still a long way off from any true mainstream smart glasses solution.
Whatever eye-worn computer ends up becoming mainstream will have to, at its most basic level, augment my vision in useful ways. Sight is precious, and humans know this, so enhancing our sight will be one of the most useful features of smart glasses. Letting me see farther, closer, or in the dark or low light are all core augmented vision use cases. How the computational/digital world is overlaid will need to recognize that sight is precious, that less is more, and focus on truly adding to the visual experience, not detracting from it.
Ultimately, augmented computing, by way of devices we wear that give us “superhuman” capabilities, could be the purest manifestation of Steve Jobs’s perspective that computers are bicycles for the mind. Jobs’s observation was that the tools humans create, computers chief among them, add to our efficiency as a species. Augmented computing will enhance our capabilities as a species and take this concept of computers making us more efficient in all we do to a new level.
It’s a great vision, but also one that is extremely difficult and still very far off.
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news from Microsoft’s Ignite event in areas such as their Azure Percept IoT hardware and Microsoft Mesh mixed reality platform, discussing the release of Qualcomm’s Snapdragon Sound technologies, and chatting about the multiple new business-focused 5G offerings from T-Mobile, including their Home Office Internet fixed wireless product.
A lot is going on in the world of advertising at the moment. Actually, I should clarify the difference between advertising (which, when done right, also contains a heavy dose of branding) and simply trying to sell a product. When it comes to the Internet and the many free ad-subsidized services that exist, anyone paying attention would agree the clickbait articles, dozens of trackers, websites filled with malicious code, and more have gotten out of control.
I understand this world all too well. From 2008-2010 I was very close to one of the largest tech blogs on the Internet. I spent a great deal of time working with them on their business model, growth strategy, and revenue growth strategy. The product itself was free but was subsidized with ads. During negotiations with our ad-placement agency, I was under constant pressure from their demands about how they wanted ads to show up. The disconnect, which I found nearly impossible to get them to understand, was that you could place all the ads in the world all over an article, but if it hurts the customer experience and lessens engagement, there is no ROI. The internal data we had via reader analytics (I still called them customers) and ad engagement made this clear.
The battle I was up against was the belief that with enough ads thrown at a person, via pop-ups, video, in-line placements, etc., you could inundate them to the point that they simply could not ignore it. This is where tracking came in, and I saw many internal ad-deck pitches that demonstrated volume across websites alongside some shady statistics about how that helps consumer awareness. But my gut was always that this would create more negative sentiment toward that product and brand than positive. What is happening with ad tracking and following today is the equivalent of a traveling salesman following you everywhere you go, screaming “buy this product” at the top of his lungs. Such a thing would turn people off to the product, even if it is decent, simply because of the tactics of the salesman.
From years of studying this space and having spent time in it, I’m convinced of several things. The first is that consumers genuinely enjoy discovering new products and brands that enrich their lives. The second is that excessive ad placement and tracking are a turn-off and make a product or brand look desperate rather than confident enough to go up against the competition.
Google and Facebook are both central to this debate. Both have proven effective, given current means of ad targeting. Both have lowered customer acquisition costs for brands and product companies, but again at the cost of feeling creepy, or at the very least overly aggressive. This is why Google’s announcement of phasing out third-party tracking cookies, and their Privacy Sandbox, is of interest.
From reading through the blog post, on the surface it appears Google is simply distilling the information gathered on a person down to the bare minimum needed to put them into a defined cohort of interests. In some ways, this is how Google functioned in its early days, and how the advertising industry historically functioned in the analog age. If I’m a reader of a magazine on skateboarding, or tennis, or cars, etc., then it is safe to assume I’m interested in that topic and products within the category.
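The cohort idea above can be sketched in a few lines. To be clear, this is a toy of the general concept, not Google’s actual Privacy Sandbox algorithm; the function name and cohort labels are my own invention.

```python
# Toy sketch of interest-cohort assignment (NOT Google's actual
# Privacy Sandbox algorithm): instead of tracking an individual
# across sites, distill their browsing topics into one coarse cohort
# label that many users share, and target ads at the cohort.

from collections import Counter

def assign_cohort(visited_topics):
    """Map a user's browsing topics to a single coarse interest cohort."""
    counts = Counter(visited_topics)
    top_topic, _ = counts.most_common(1)[0]
    return f"cohort:{top_topic}"

user_a = ["skateboarding", "skateboarding", "tennis"]
user_b = ["skateboarding", "skateboarding", "cars"]
# Both users land in the same coarse cohort; in a real on-device
# implementation, their full histories would never leave the device.
assert assign_cohort(user_a) == assign_cohort(user_b) == "cohort:skateboarding"
```

The privacy gain comes from advertisers seeing only the shared label, much as a magazine publisher only ever knew "this person subscribes to a skateboarding magazine."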
This is why I’ve always believed niche content is the best bang for the buck for any ad or product spend. I’d bet good money those ads, if done right and placed relevantly, will outperform Facebook and Google any day. Niche content breeds more engagement, and more engagement creates higher ad ROI.
Google remains in the best position here, with the exception of audio/podcast platforms, which I believe are excellent ad mechanisms given the integrated flow of the ad read and placement. For Google, YouTube is this mechanism, and I’ve noticed a lot more content creators on YouTube starting to integrate ads cleverly into their content. Anecdotally, YouTube is becoming one of the driving forces behind my purchasing behavior, as I find authentic product reviews on YouTube the most helpful source to influence my purchases.
While Google remains well-positioned, I still question Facebook’s ability to adapt here. I’m even less optimistic about Facebook proper, whereas I can see Instagram being in a better position to adapt. The Facebook app/website itself may very well end up like Yahoo: not dying, but fading more and more into irrelevance. Facebook’s likely inability to develop new assets or acquire companies doing something innovative in a social media adjacency is going to hurt the company’s overall ability to compete, in my opinion.
The model Facebook has appealed to is the Web 2.0 way of advertising, not that of the Web 3.0 digital world. Oculus may be the one bright light for the company, and if they can make the Oculus experience mainstream, there is more upside. But it is still a question mark whether Facebook is the company that will mainstream VR and AR.
As I mentioned, advertising cannot get in the way of the customer experience. This is an evolving landscape that will need to find a sweet spot: helping consumers discover new brands and products that enrich their lives without stalking or harassing them, while protecting their privacy. That will be a cornerstone of success for any company looking to subsidize their service with ads, and that is still a very different world than the one we live in now.
On day one of the winter edition of Ignite, Microsoft announced the launch of Microsoft Mesh, a new collaboration platform powered by Azure, that allows people to have a shared virtual experience on a variety of devices from Mixed Reality (MR) and Virtual Reality (VR) headsets to PCs and Macs and phones. In his opening remarks, CEO Satya Nadella compared Microsoft Mesh to Xbox Live’s launch in 2002. The service made online multiplayer gaming for consoles mainstream by making it easier for developers to connect their games to the internet. The launch’s result was a rapid growth in online multiplayer titles for the Xbox and Xbox 360, giving Microsoft an advantage over Sony and Nintendo for years. It will be interesting to see how Mesh will change the approach taken by companies like Spatial, a brand that many already associate with virtual collaboration. In my mind, a common platform is the only way to achieve a level of collaboration that can be genuinely inclusive and natural. Yet, I realize all too well that business models sometimes get in the way, and it is much easier for Microsoft to focus on a common platform when monetization comes not from the platform itself but the cloud that powers it.
Microsoft’s Alex Kipman, the mind behind Kinect and HoloLens, spent an hour on the keynote stage talking about the opportunities this platform opens up as several guests from the science and entertainment business, all appearing in holographic form, bore witness to his belief that the future is paved with potential for shared virtual experiences.
I had the opportunity to experience the keynote not through my PC, as I have been accustomed to for the past year, but as an avatar by the edge of the stage where Kipman’s hologram stood. While Kipman looked like himself, my fellow participants and I appeared as much more basic AltspaceVR avatars. The fact that I could not tell who was in the audience, even though I was aware there were other analysts and reporters I knew, did not make the experience less engaging. I could see people moving around, using emojis to react to the presentation, and even being annoying when they teleported themselves too close for comfort.
It is not a secret that I am not a great fan of VR. I usually find that the effort I put into the setup and the experience outweighs the perceived value I get from it. So I was surprised to find the keynote experience quite engaging. While there was still a gimmicky side to it, like having a whale shark circling over my head, it clearly gave me an idea of what events like Ignite could be like in the future. More importantly, though, it showed me what collaboration might be like. Maybe it is because I have not been in a room with that many people in over a year, but the experience did feel more personal than watching on a computer screen. Having experienced HoloLens as well, I can certainly say I prefer the holographic experience in the room I am in, especially when it comes to collaboration. When working with someone, the experience is created more by the interaction you have than by the environment. This is why we have been struggling so much with remote work during the pandemic.
One of the aspects of workflows and collaboration I have been highlighting over the past year is how heterogeneous the set of applications and operating systems we work with every day really is. Whether you are a Microsoft Office or a Google Workspace user, you are most likely jumping between the two environments and using apps like Zoom or Slack on top, even if both productivity suites offer chat and video solutions. This might be because of personal preference or because of the people you work with. Either way, it is rare to be all in with just one solution. You might be able to do it within your organization but not when you work with external people.
Now think about a real-life meeting. While our work might be on a PC, a Mac, or a phone, shared across several apps, what we bring to the meeting is, first and foremost, us. Now think about how not having a common platform would limit that experience. Mesh offers developers a full suite of AI-powered tools for avatars, session management, spatial rendering, synchronization across multiple users, and “holoportation” to build collaborative solutions in mixed reality. More importantly, however, Mesh allows people to meet others where they are. The ability to benefit from this future even without a top-of-the-line device like HoloLens means that, hopefully, we will all have a seat at the table. Mesh also guarantees consistency in the way I show up to my meetings. I am me, and in real life, I show up in the same way. This is, of course, critical if you want to create a realistic experience, and right now, it is not possible. The way I showed up at a Spatial meeting a few weeks ago was very different from how I materialized at the Ignite keynote. It goes without saying that these inconsistencies prevent you from creating a genuine connection with the people you interact with.
Proximity with our office co-workers will not only make collaboration easier; it will help at a societal level as well. I loved what filmmaker James Cameron (of Avatar, the movie) said about MR driving more empathy because we can share more with someone. Again, think about the past twelve months and how being in your colleagues’ home offices, kitchens, or living rooms helped increase our empathy, and that was simply through a screen.
Satya Nadella wrapped up his opening remarks with one of his favorite lines used to describe the possibilities for HoloLens: “When you change the way you see the world, you change the world you see.” It will certainly be fascinating to see what people build on Microsoft Mesh. Still, there is no doubt in my mind that building a platform that brings different devices together is an excellent example of a growth mindset; something Nadella has instilled throughout the company.
This week’s Techpinions podcast features Mark Lowenstein and Bob O’Donnell discussing a host of 5G-related news items from the past week including the results of the critical C-Band auctions for mid-band 5G radio spectrum, the announcement of T-Mobile’s new unlimited Magenta Max 5G data plan and what it implies, and the debut of HPE’s new telco-focused division and its first offerings, as well as the new partnership for 5G infrastructure between Intel and Google.
One of the biggest issues facing the tech industry right now is significant delays and backlogs at semiconductor foundries. Almost every tech category has seen a boost in demand, and that has led to a dynamic of demand significantly outstripping the supply of semiconductors.
While an unanticipated surge of demand is a chief cause of the chip shortage, China’s initial shutdown this time last year due to the pandemic was going to cause delays for the entire year regardless of any uptick in demand. The surge in demand exacerbated this problem even more, which now brings an important observation to bear.
When I wrote last week about Apple perhaps partnering with Intel, I was only scratching the surface of what should be on every tech company’s mind if their products rely on semiconductors in some way. This shortage is brought about by the lack of foundry options for semiconductor companies. TSMC has had the clear lead for several years, and if you wanted a product on a leading-edge process, your only option was TSMC. Samsung kept pace, but they hit some snags with their 10nm and 8nm processes, which led some of their customers to go to TSMC.
While TSMC is not a semiconductor foundry monopoly yet, we are seeing a glimpse of what the world may look like if TSMC is either the last foundry standing or, at the very least, holds a multi-year advantage in leading-edge process technology.
In either scenario, competition is painfully impacted. If only a few of the biggest tech companies, with scale and money, have access to leading-edge process technology and transistor designs, then those companies will maintain an edge on everyone else because they will be the few who can actually secure inventory. Everyone else will have to wait to get their chips to market. This is not a good scenario.
While TSMC is investing in foundries in the US, right now in Arizona, they will take several years to build out and be ready to start producing wafers. The dynamic of TSMC having a monopoly on the leading-edge process is, at a minimum, concerning, since they would control who can get supply AND they could control pricing. This is why it is of the utmost importance that foundry competition is established and maintained.
Yesterday, Joe Biden signed an executive order to investigate the issues surrounding the chip shortage and look at strengthening the supply chain. Many hope for renewed focus and more investment around the CHIPS Act, and for active efforts to make US-based foundries more competitive.
Semiconductor foundries are not startup opportunities. Therefore, the US government’s options for supporting US-based semiconductor manufacturing are Intel and GlobalFoundries. While I do hope more can be established around the CHIPS Act, I do not have much faith in the government to help solve this, which is a key reason I mentioned Apple: I feel it makes more sense for private enterprise to help Intel via joint ventures or supply commitments.
The Art of Dual Sourcing Foundries
Given the reality that, both short and long term, the industry will have only a few viable foundries supporting the growing demand and need for semiconductors, companies that can strategically execute a dual-source strategy will be well-positioned.
This was always something Broadcom did well. I recall many conversations with their executives who were proud of the fact their chip design libraries were portable, and they could make them at whatever foundry they saw fit. Qualcomm is similarly executing a dual-source foundry strategy as they have versions of the same chipsets made at both TSMC and Samsung.
Companies that dual-source will be extremely well-positioned to weather a number of different storms that could come their way, from the geopolitical risks I have outlined before to national economic issues, global catastrophes, and more. While this isn’t discussed much publicly, for obvious reasons, it is top of mind for many executives in the supply chain and at companies that make products via semiconductor foundries.
While I keep coming back to this point, the fundamental issue that needs to be addressed in the long term is how to keep foundry competition alive. I believe Intel is critical to that future, which is why the industry needs to be concerned with Intel’s future whether they buy chips from Intel or not.
There’s been a lot of excitement around the trajectory of PC sales in 2020. The pandemic clearly reinvigorated demand for PCs, and sales could have been even greater had the market not faced component shortages. Working from home, learning from home, and fighting boredom at home drove upgrades as well as new sales, expanding the overall userbase. Sales reached volumes we had not seen for almost 20 years.
Many were eager to point out that the rise of smartphones at PCs’ expense had been rebalanced as consumers and enterprise users alike rediscovered the PC. I think a more realistic read of what 2020 brought to the PC market is that more time was spent on PCs at home than ever before. And this was rooted in the fact that everything was done at home more than ever before, and with that, more was also done digitally. If you look at the time a typical knowledge worker would have spent on the PC at the office, it probably didn’t change much; it just so happened those working hours on a PC were taking place at home. One trend that did change with everybody transitioning to online life was that, with more time spent doing things online, sharing devices became much more difficult. This meant that, on average, household penetration grew, with many households approaching a one-to-one PC-to-human ratio.
As we look forward, of course, the big question PC vendors are asking is what happens to this base when we return to a more predictable life pattern that involves activities outside the home. I purposely don’t want to call it the new normal or back to normal, mostly because I hope we will go forward rather than back, with both work and school embracing a richer, more equitable digital transformation. Everybody is trying to predict how this new base is going to behave going forward. Yet, for the chip vendors, Microsoft, and every PC brand in the market, the focus should be on keeping these users engaged with the PC they might have bought in 2020. Only continued engagement will turn that 2020 sale into a 2025 upgrade. Focusing on an upgrade is certainly better for the industry than focusing on a sale.
The state of the consumer PC market pre-pandemic was characterized by a large number of users who had what I would call “an emergency PC”. Most of their computing needs were taken care of by their smartphones, but they would use a PC for those tasks that required a larger screen, a keyboard, and maybe some applications that just did not run on mobile. That emergency PC did not drive any emotional attachment or a strong need for an upgrade. Even when a user would consider an upgrade, the budget they would allocate was limited because the PC’s value was seen as limited, except for gaming. 2020 changed that. 2020 emphasized quality computing experiences, from video calling to connectivity to brighter and larger screens. Consumers realized the need for a better PC experience, and with that realization came the willingness to invest more in their purchase. This is great news for the industry, regardless of where overall sales end up, as average selling prices had been falling outside of the premium segment for quite some time.
So what now?
If most of us transition back to a more smartphone-first computing experience, where does that leave the brand-new PC we bought over the past year? Will PC life cycles return to a pre-pandemic average, or will they shorten? I would argue that a few things have changed in favor of shorter life cycles and continued engagement on a PC. I would also argue that every vendor in the PC ecosystem has its work cut out to continue to show value by strengthening the app ecosystem, continuing to drive designs that highlight the overarching experience rather than individual features, and, finally, showcasing what can be done with these new PCs.
What are the factors that I think play in the PC’s favor?
First of all, our digital life has grown. Whether you’re thinking about gaming, entertainment, online shopping, or telehealth, we have been doing a lot more through a screen, and some of these experiences have been better or more convenient than doing them in person. While some of this time will return to in-person activities, I firmly believe the overall time spent online will remain higher than before the pandemic, and the kinds of activities we do on a PC will be more engaging than in the past.
When it comes to business, there are two factors positively impacting demand. First is the recent awakening of many organizations on the importance of driving employee engagement and satisfaction through the tools, hardware, and software provided to get the job done. While I do not expect every employee to benefit from this newfound awareness, I anticipate knowledge workers and first-line workers will. The second factor is the continued increase in security threats and the high risk associated with that. This, coupled with a higher number of remote workers, will get organizations to broaden their portfolio of enterprise liable devices to adequately cater to their users.
Aside from returning to increased mobility, two main factors play against PC demand: continued supply chain constraints and limited budgets. The supply chain will likely return to normal by the second half of the year, vaccine rollouts and Covid variants permitting. Spend will vary depending on the market. There is more clarity in mature markets, where, especially on the commercial side, organizations might shift some spend from other budgets, such as travel, to ensure every employee is taken care of. What is certain is that the pandemic caught many businesses unprepared, and that is not something they want to repeat. In the consumer market, it might be time for some out-of-the-box thinking around financing and other incentives that work so well in the smartphone market.
Some bullish forecasts see 2021 PC volumes above 350 million units. While that might be possible from a production perspective, I expect some inventory replenishment will occur, leaving sell-through flat to only slightly up from 2020 volumes.
When Apple introduced the Apple Watch, they initially positioned it as jewelry that also told time and offered a few health features.
However, over the last three years, Apple has added many health and fitness features to the Apple Watch, most recently adding ECG and Blood Oxygen monitoring.
Since Apple introduced the Apple Watch and made health monitoring a reason for it to exist, I have wanted two other distinct health features.
The first is related to blood sugar readings for people with diabetes. I have been a diabetic for over 25 years, and at least three times a day, I have had to prick my finger to check my blood sugar reading and adjust my insulin dose accordingly.
About five years ago, I began using the Dexcom Continuous Glucose monitor to monitor my blood sugars electronically and do away with the pinpricks.
The Dexcom Glucose monitor consists of a sensor patch that I place on my stomach, with two tiny prongs inserted into my belly that analyze the interstitial fluid to read my blood sugar. That sensor is connected to a Bluetooth transmitter that sends the reading to my iPhone, and through the Dexcom app on my Apple Watch, I can see my blood sugar readings 24/7.
Three years ago, Apple hinted that this type of blood sugar reading might someday be possible via special light sensors in an Apple Watch, and I admit that I got excited about the prospect. Although my medical insurance covers 50% of the Dexcom product cost, I still pay about $1500 a year for my share of the Dexcom bill.
At CES, a Japanese startup, Quantum Operation, put a glucometer into a watch. Although this was a prototype, if Quantum Operation has indeed solved how to build a blood sugar sensor that uses light to get readings, it would be a breakthrough.
Three years later, Apple still has not found a way to add this feature to the Apple Watch, although I have seen recent reports that it could be in the new Apple Watch later this year.
Samsung is also planning to add blood sugar testing light sensors to their watch shortly.
These are promising developments that suggest a light sensor-based blood sugar solution could be built into smartwatches in the future.
The second feature I have wanted in the Apple Watch is the ability to read my blood pressure. I had a triple bypass in 2012 and need to take my blood pressure daily. Some attempts to do this with smartwatch bands make the watch bulky, since the band inflates like a traditional BP cuff, much like the blood pressure readings you get from a dedicated monitor.
At CES, Biospectal showed off its Biospectal OptiBP, which lets a person use a smartphone camera to measure blood pressure. This technology was introduced last fall, and Forbes did a great piece on this product launch of what I consider a highly important health monitoring technology.
If you have been to any doctor, you know that taking your blood pressure is one of the first things they do on any visit. This is because it can tell them a great deal about a person's heart health, which is at the center of many medical conditions.
The company used CES 2021 to highlight the product. It said a recent independent large-scale clinical study, published in Nature's Scientific Reports, validated Biospectal OptiBP's ability to measure blood pressure with the same degree of accuracy as a traditional blood pressure cuff. It uses the smartphone's built-in camera to record and measure a user's blood flow at the fingertip in half the time a traditional cuff takes (about 20 seconds). I could not find the minimum smartphone camera requirements for this type of BP test, but I suspect most recent smartphone cameras would qualify. OptiBP's proprietary algorithm and optical signal capture methods turn light information into blood pressure values by optically measuring blood flow through the skin.
The Biospectal OptiBP for Android app is in public beta and available now in the US, UK, France, Germany, Spain, and Switzerland. Biospectal OptiBP for the iOS app is planned for release later this year. The company said that interested participants could register for the public beta or sign up to be notified once Biospectal OptiBP becomes available in their country.
These are breakthrough developments that bode well for wearable health monitors and give me hope that sometime soon, Apple, Samsung, and others may be able to add these two health-monitoring features to future smartwatches and even fitness trackers.