This week’s Tech.pinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell discussing the Net Neutrality decision, Disney’s purchase of 20th Century Fox, Apple’s purchase of Shazam, and Microsoft’s new AI-related announcements.
After TechCrunch first broke the story last Friday, Apple confirmed on Monday its acquisition of UK-based Shazam. Apple said:
“We are thrilled that Shazam and its talented team will be joining Apple. Since the launch of the App Store, Shazam has consistently ranked as one of the most popular apps for iOS. Today, it’s used by hundreds of millions of people around the world, across multiple platforms. Apple Music and Shazam are a natural fit, sharing a passion for music discovery and delivering great music experiences to our users. We have exciting plans in store, and we look forward to combining with Shazam upon approval of today’s agreement.”
Apple did not disclose the price, but several sources have confirmed to us that the deal is in the region of $400 million.
Yesterday, as expected, the Federal Communications Commission repealed 2015’s Open Internet Rules, also known as ‘net neutrality’. Much ink has been spilled (or keys tapped) on this issue, including my own Techpinions piece two weeks ago, arguing about the paradox of repealing network neutrality while at the same time blocking AT&T’s acquisition of Time Warner.
I see merit on both sides of the network neutrality argument, and I tend to agree with Jon Leibowitz’s Wall Street Journal op-ed yesterday that the sky didn’t fall when Title II was imposed in 2015, nor will it now that it has been repealed.
So, as this continues to be litigated over the coming months, it might be a good time to think about the best protections against anticompetitive practices, while recognizing the rapid changes occurring in content, digital media, and communications. In this “Post Net Neutrality World Order”, I urge the major actors in the game—service providers, content providers, regulators—to adopt the following Code of Conduct.
Read and Adopt the Words of AT&T’s Bob Quinn. The carrier’s Senior Vice President pledged in a November blog post that “We will not block websites, degrade internet traffic based on content, [or] unfairly discriminate in our treatment of Internet traffic”. Enforcing this, in principle, is now in the remit of the Federal Trade Commission, which in the past has been fair and balanced on this issue.
Positive Practices Are More Permissible Than Negative Practices. It’s not that big a deal if AT&T zero-rates DTV content for its wireless subscribers, or offers some attractive bundles. It’s more concerning if it engages in practices that slow down services competing with DTV. Similarly, on the B2B side of the equation, there’s a good case for ‘fast lanes’ in some instances. ‘Slow lanes’ will be harder to justify.
Refrain From Practices That Clearly Impinge On the Idea of the Open Internet. Some of the biggest concerns have to do with the potential for service providers to take a “cable” approach to the Internet, such as charging for access to specific sites. It will take only one or two airline-esque practices like this to set us back.
Recognize That Wireless Is Different Than Fixed. I’ve long argued that the FCC should look at wireless through a different lens than fixed broadband. Wireless services will forever be capacity constrained, even in a 5G world. That’s why ‘unlimited’ plans always come with an asterisk. There have been practices, such as throttling and zero rating, where regulators, even in a Title II world, trod lightly. New services such as LAA, and the concept of network slicing, will introduce more opportunities to offer tiers of service.
There’s Nothing Wrong With Tiers of Service. Say it’s 2020, and Nintendo introduces a new, multi-player online virtual reality game requiring faster speeds and lower latencies. So it pays some sort of ‘fast lane’ surcharge to a service provider, some of which gets passed on to the consumer. I don’t see anything wrong with that. Even though more and more households might be able to get 1 Gbps service, they might not necessarily need it, or need it all the time.
DOJ, Meet FCC; FCC, Meet DOJ. We have to work toward a broader policy framework. As I argued in an earlier column, repealing network neutrality and blocking AT&T-Time Warner don’t seem to be coming from the same thought process (yes, I recognize that these are handled by different agencies). That said, we’ve seen more practices in the content business that have been detrimental to consumers—DISH standoffs with networks, Amazon-Google—than any violations of Open Internet rules.
Be Transparent. There are going to be situations where some of the practices that have been the focus of those in favor of greater regulation make sense, given business realities or this changed landscape. This is where the FTC could step in if the service providers are not more proactive themselves.
More broadly, the tectonic changes occurring in our communications, digital media, and content landscape beg for a broader strategic review of our policy framework. This is one of the reasons there’s been a call for Congress to legislate this, rather than have it be in the hands of the FCC, whose philosophy could change every four years. The 1996 Telecom Act seems increasingly outmoded, as it fails to properly account for, or adjust to, the emergence of wireless broadband (LTE, 5G), smartphones, the rise of OTT and streaming, and consolidation in the content landscape (Comcast-NBC Universal, AT&T-Time Warner, Disney-Fox). It seems like right now, we’re dealing with all of these changes on a deal-by-deal basis: impose NN and then repeal it; allow Comcast-NBC/Universal but block AT&T-Time Warner; allow Internet companies to do things that telecom companies can’t. In this giant Venn diagram of telecom and the Internet, you’ve got AT&T and Verizon owning important content assets, while Google and Facebook provide broadband services and OTT communications and messaging services.
The other thing this all points to is that we need more competition in broadband. Currently, only 50% of households have access to more than one decent broadband provider. A more competitive broadband market would more naturally prevent some of the practices we’re now trying to legislate our way out of. There’s the potential for some change here with the approach of 5G and fixed wireless. A 2020 Telecom Act might, for example, revisit the rules around network resale, which has led to more robust broadband competition in other countries.
Earlier this week, NVIDIA launched the Titan V graphics card at the NIPS (Neural Information Processing Systems) conference in Long Beach, to the surprise of many in the industry. Though it uses the same Volta-architecture GPU that has been shown, discussed, and utilized in the Tesla V100 product line for servers, this marks the first time anything based on this GPU design has been directly available to the consumer.
Which consumer, though, is an interesting question. With its $3000 price tag, NVIDIA positions the Titan V toward developers and engineers working in machine learning, along with other compute-heavy fields like ray tracing, artificial intelligence, and oil/gas exploration. With the ability to integrate a single graphics card into a workstation PC, small and growing businesses or entrepreneurs will be able to develop applications utilizing the power of the Volta architecture and then deploy them easily on cloud-based systems from Microsoft, Amazon, and others that offer NVIDIA GPU hardware.
Giving developers this opportunity at a significantly reduced price and barrier to entry helps NVIDIA cement its position as the leading provider of silicon and solutions for machine learning and neural net computing. NVIDIA often takes a top-down approach to new hardware releases, first offering them at the highest cost to the most demanding customers in the enterprise field, then slowly trickling out additional options for those that are more budget conscious.
In previous years, the NVIDIA “Titan” brand has targeted a mixture of high-end enthusiast PC gamers and budget-minded developers and workstation users. The $2999 MSRP of the new Titan V moves it further into the professional space than the enthusiast one, but there are still some important lessons we can glean from the Titan V about Volta, and about any future GPU architecture from NVIDIA.
I was recently able to get a hold of a Titan V card and run some gaming and compute applications on it to compare to the previous flagship Titan offerings from NVIDIA and the best AMD and its Radeon brand can offer with the Vega 64. The results show amazing performance in nearly all areas, but especially in the double precision workloads that make up the most complex GPU compute work being done today.
It appears that gamers might have a lot to look forward to with the Volta-based consumer GPU that we should see arriving in 2018. The Titan V is running at moderate clock speeds and with unoptimized gaming drivers, but it was still able to offer performance 20% faster than the Titan Xp, the previous king-of-the-hill card from NVIDIA. Even more impressive, the Titan V is often 70-80% faster than the best that AMD is putting out, running modern games at 4K resolution much faster than the Vega 64. And the GV100 GPU on the card is doing this while using significantly less power.
Obviously at $3000, the Titan V isn’t on the list of cards that even the most extreme gamer should consider, but if it is indicative of what to expect going into next year, NVIDIA will likely have another winner on its hands for the growing PC gaming community.
The Titan V is more impressive when we look at workloads like OpenCL-based compute, financial analysis, and scientific processing. In key benchmarks like N-body simulation and matrix multiplies, the NVIDIA Titan V is 5.5x faster than the AMD Radeon RX Vega 64.
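For readers curious what this kind of workload actually looks like, here is a toy NumPy sketch (illustrative only, not the benchmark code used in these tests) of the all-pairs N-body force calculation such tests stress. The arithmetic is dominated by double-precision multiplies, adds, and an inverse-cube term; on a GPU, the same math is spread across thousands of threads, which is exactly where the Titan V’s FP64 hardware pays off.

```python
import numpy as np

def nbody_accelerations(pos, mass, softening=1e-9):
    """Gravitational accelerations for N bodies, all in float64.

    pos: (N, 3) positions; mass: (N,) masses (G folded in).
    The O(N^2) all-pairs sum is dominated by double-precision
    multiplies and adds -- the operations FP64 benchmarks stress.
    """
    # diff[i, j] = pos[j] - pos[i], the vector from body i to body j
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist2 = (diff ** 2).sum(axis=-1) + softening  # softening avoids 1/0
    inv_dist3 = dist2 ** -1.5
    # a_i = sum over j of m_j * (r_j - r_i) / |r_j - r_i|^3
    return (diff * (mass[np.newaxis, :] * inv_dist3)[..., np.newaxis]).sum(axis=1)

rng = np.random.default_rng(0)
n = 2048
acc = nbody_accelerations(rng.standard_normal((n, 3)), rng.random(n))
print(acc.shape)  # (2048, 3)
```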
Common OpenCL-based rendering applications use a hybrid of compute capabilities, but the Titan V is often 2x faster than the Vega graphics solutions.
Not every workload utilizes double precision computing, and those that don’t show more modest, but still noticeable, improvements with the Volta GPU. AMD’s architecture is quite capable in these spaces, offering great performance for the cost.
In general, the NVIDIA Titan V proves that the beat goes on for the graphics giant, as it continues to offer solutions and services that every other company is attempting to catch up to. AMD is moving forward with the Instinct brand for enterprise GPU computing, and Intel is getting into the battle with its purchase of Nervana and its hiring of GPU designer Raja Koduri last month. 2018 looks like it should be another banner year for the move into machine learning, and I expect its impact on the computing landscape to continue to expand, with NVIDIA leading for the foreseeable future.
With my daughters both having gone through junior high, and one now in high school, I’ve become increasingly convinced the future of the workplace is happening before my eyes. This is a workplace very different from the one many of us experience today. It truly is hard to appreciate how different the world will look from a technology standpoint without seeing how kids today use the digital tools they have at their disposal.
Very soon, Apple will make its 100th acquisition since it bought NeXT Computer in 1996. That acquisition was done by then-Apple CEO Gil Amelio, and just before he made it, he asked me my thoughts on buying this company from Steve Jobs. At the time, because I was helping Mr. Amelio with Apple’s mobile strategy, he and I talked weekly about his goal of reviving Apple. Long-time Apple watchers will remember that Gil Amelio was on Apple’s board when the company had lost its way and was over $1 billion in the red. When the board ousted Apple CEO Michael Spindler in early 1996, Mr. Amelio was asked by the board to become CEO and try to turn the company back to profitability.
When I first saw Hidden Figures, it got me thinking about workplace diversity. In the movie, Katherine Goble helped a group of white American men do something they could not do on their own: send a man into space to orbit the earth, and eventually help the US become the first country to put a man on the moon. One could argue this group of white American men would have eventually figured it out, but how long would it have taken? Would Russia have beaten the US to the moon in that scenario? We will never know, because Katherine Goble, an African American woman, played a key role in helping the US get there first. You could make a strong case that it was her presence on the team which gave the US space program the competitive advantage in the global space race.
What strikes me about this line of thinking is how workplace diversity is a competitive advantage and should be viewed as such. When we hear about companies trying to become more diverse, they appear to do so mostly to come across as an equal opportunity employer. However, if companies viewed workplace diversity as a competitive advantage, then it is in their, and their shareholders’, best interests to aggressively pursue this course of action.
Interestingly, there is an increasing body of evidence showing that ethnically and gender-diverse teams tend to be more creative and solve problems better than ones that are not. Many psychologists have been studying what makes some teams more effective than others, and psychologist Christopher Chabris co-authored an article in the NYTimes titled Why Some Teams are Smarter than Others. While a range of factors contributed to a team’s success, one of the three tentpole findings was not just diversity, but that teams with more women than men performed better.
Christopher Chabris has been publishing research studies and working with the social science community digging into the broader theme of collective intelligence. As many new research reports have begun to suggest, the collective intelligence of a group gets better when there is diversity (broadly defined). And what is a company if not a giant pool of collective intelligence with the same set of goals in mind? If these studies are correct, then diversity will play a role not just in a team’s competitive advantage, but in the collective intelligence of the entire workforce.
This is not a new idea, and I’m not the first to position diversity as an important element in a company’s, or even a nation’s, competitive advantage, but it is a theme worth remembering and cementing into the mindset and culture of an institution. The reality is, however, that just having diversity as a goal is not enough. A company has to have a process to utilize its diversity effectively.
The NASA space program was on the verge of squandering its competitive advantage by not effectively empowering Katherine Goble to do what she did best. Equally important was her NASA teammates’ willingness to listen to and accept her ideas and input. Had her boss, Al Harrison, not stepped in and empowered Katherine as a part of the team, there is a good chance the US would have lost the space race to Russia. Having diversity on teams, and having procedures in place that empower that diversity, can lead to a powerful competitive advantage.
If more CEOs and executive teams understood that diversity is a competitive advantage, they would rush to become as diverse as possible, because a competitor who does understand this becomes a much bigger threat.
Another factor to consider here is the vantage point of competitive advantage for a nation. This point applies not only to including diversity in a nation’s leadership structure, but also to the policies a nation puts in place, which can inhibit the creation of diverse companies. For example, a concern with the current political climate in the US is that it could drive top talent from other nations and ethnic backgrounds to leave the US and start or join companies in other countries.
Diversity in a company’s workforce and management teams should be understood as a competitive advantage, as much of the research suggests. Hopefully, companies in every industry start seeing this additional angle as another reason to aggressively pursue a diverse culture.
One of the things that stood out in some recent analysis I did of media consumption trends is that the more options consumers have for consuming TV content, the more of it they seem to consume. It seems obvious that if consumers could consume their TV content on any device they choose, at any time they choose, they would consume as much of it as they can. Nobody wanted their TV stuck, well, on their TV.
From cars to computers to connectivity, speed is an attractive quality to many people. I mean, who can’t appreciate devices or services that help you get things done more quickly?
While raw semiconductor chip performance has typically been—and still is—a critical enabler of fast tech devices, in many instances, it’s actually the speed of connectivity that determines their overall performance. This is especially true with the ongoing transition to cloud-based services.
The problem is, measuring connectivity speed isn’t a straightforward process. Sure, you can look for connectivity-related specs for your devices, or run online speed tests (like Speedtest.net), but very few really understand the former and, as anyone who has tried the latter knows, the results can vary widely, even throughout the course of a single day.
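As a rough illustration of why those results bounce around, here is a minimal Python sketch of what a speed test fundamentally does: time the transfer of a payload and divide. The URL is a placeholder, and real tests add parallel streams, nearby servers, and repeated runs, precisely because a single sample like this captures only one moment’s congestion and signal conditions.

```python
import time
import urllib.request

# Placeholder URL -- any large, consistently hosted file would do.
TEST_URL = "https://example.com/100MB.bin"

def measure_download_mbps(url, chunk_size=64 * 1024):
    """Time one download and return throughput in megabits per second."""
    total_bytes = 0
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)

# One sample reflects that moment's congestion, signal quality, and
# server load -- which is why results vary throughout the day.
print(f"{measure_download_mbps(TEST_URL):.1f} Mbps")
```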
The simple truth is, for a lot of people, connectivity is black magic. Sure, most people have heard about different generations of cellular technology, such as 4G or the forthcoming 5G, and many even have some inkling of different WiFi standards (802.11n, 802.11ac, 802.11ad, etc.). Understanding how or why your device feels fast doing a particular online task one day but not the next, however, remains a mystery.
Part of the reason for this confusion is that the underlying technology (and the terminology associated with it) is very complex. Wireless connectivity is a fundamentally difficult task that involves not only complex digital efforts from very sophisticated silicon components, but a layer of analog circuitry that’s tied to antennas and physical waveforms, as well as interactions with objects in the real world. Frankly, it’s amazing that it all works as well as it does.
Ironically, despite its complexity, connectivity is also something that we’ve started to take for granted, particularly in more advanced environments like the US and Western Europe. Instead of being grateful for having the kinds of speedy connections that are available to us, we’re annoyed when fast, reliable connectivity isn’t there.
As a result of all these factors, connectivity has been relegated to second-class status by many, overshadowed by talk of CPUs, GPUs, and other types of new semiconductor chip architectures. Modems, however, were arguably one of the first specialty accelerator chips, and they play a more significant role than many realize. Similarly, WiFi controller chips offer significant connectivity benefits, but are typically seen as basic table stakes—not something upon which critical product distinctions or buying decisions are made.
People are finally starting to figure out how important connectivity is when it comes to their devices, however, and that’s starting to drive a different perspective on communications-focused components. One of the key driving factors is the evolution of wireless connectivity to speeds above 1 gigabit per second (1 Gbps). Just as the transition to 1 GHz processors was a key milestone in the evolution of CPUs, so too has the appearance of 1 Gbps wireless connectivity options enabled a new perspective on communications components such as modems and WiFi controllers.
Chipmaker Qualcomm was one of the first to talk about both Gigabit LTE for cellular broadband modems, as well as greater-than-1 Gbps speeds for 802.11ac (in the 5 GHz band) and 802.11ad (in the distance-constrained 60 GHz band). Earlier this year, Qualcomm demonstrated Gigabit LTE in Australia with local carrier Telstra, and just last month, it showed off similar technology here in the US with T-Mobile. In both cases, they were using a combination of Snapdragon 835-equipped phones—such as Samsung’s S8—which feature a Category 16 (Cat16) modem, and upgraded cellular network equipment from telecom equipment providers, such as Ericsson. The company also just unveiled its new Snapdragon 845 chip, expected to ship in smartphones in 2018, which offers an even faster Cat18 modem with a maximum download speed of 1.2 Gbps.
In the case of both faster LTE and faster WiFi, communications component vendors like Qualcomm have to deploy a variety of sophisticated technologies, such as MU-MIMO (multi-user, multiple input, multiple output) transmission and antenna technologies, and 256 QAM modulation (a denser signal encoding that packs 8 bits into each transmitted symbol), among others.
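To make that concrete, here is a simplified back-of-the-envelope calculation of where a “gigabit” LTE number comes from. The carrier count and MIMO configuration below are illustrative assumptions (real deployments vary by network and modem category), and actual throughput is lower once coding and signaling overhead are subtracted.

```python
# Back-of-envelope peak LTE throughput (illustrative; ignores overhead).
SUBCARRIERS_PER_20MHZ = 1200    # 100 resource blocks x 12 subcarriers
SYMBOLS_PER_SECOND = 14_000     # 14 OFDM symbols per 1 ms subframe
BITS_PER_SYMBOL_256QAM = 8      # 256 QAM packs 8 bits into each symbol
MIMO_LAYERS = 4                 # 4x4 MIMO: four parallel spatial streams
AGGREGATED_CARRIERS = 2         # carrier aggregation bonds channels

per_layer_bps = SUBCARRIERS_PER_20MHZ * SYMBOLS_PER_SECOND * BITS_PER_SYMBOL_256QAM
per_carrier_bps = per_layer_bps * MIMO_LAYERS
total_bps = per_carrier_bps * AGGREGATED_CARRIERS

print(f"Per carrier: {per_carrier_bps / 1e6:.1f} Mbps raw")    # ~537.6 Mbps
print(f"Total: {total_bps / 1e9:.2f} Gbps raw, pre-overhead")  # ~1.08 Gbps
```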
The net result is extremely fast connection speeds that can (and likely will) have a dramatic impact on the types of cloud-based services that can be made available, as well as our quality of experience with them. There’s no denying that the technology behind these speedy connections is complicated, but with the dawn of the gigabit connectivity era, it’s time to at least acknowledge the impressive benefits these speedy connections provide.
One of the areas of great interest to me over the last five years has been the movement behind helping kids gain interest in Science, Technology, Engineering, and Math. Recently, the groups advocating for STEM education have added an “A” to this moniker, arguing that the Arts and creativity are also important to round out a tech-focused education; thus the newer acronym STEAM has been coined to describe this educational focus.
My interest in STEM and STEAM has led me over the years to look for STEM gifts during the holidays for my two granddaughters and nieces and nephews, and to compile a short list of products that would make great gifts for both boys and girls.
Here are a few products for this year that I believe will help any kid gain more interest in STEM and STEAM, provide hours of learning fun, and perhaps spark interest in STEM and STEAM careers in the future.
I like this one for the under-five age group, with a high-tech take on learning their ABCs: Codebabies’ ABCs of the Web picture book. Start early with this alphabet picture book — written by a web designer — that aims to introduce the under-fives to “the language of the web”. So instead of ‘A is for Aardvark’ you get ‘A is for Anchor tag’.
SmartGurlz (recently seen on Shark Tank), currently available online only, teaches girls how to code using self-balancing robots and action dolls controlled via mobile devices. SmartGurlz is aimed at girls 6 and up and is a great way to get them interested in STEM.
StemBox. A subscription service, available in 3-, 6-, or 12-month subscriptions and priced between $87 and $300. StemBox is also geared to girls and designed to be engaging for ages 8 to 13. It helps them develop an emotional connection to STEM and hopefully encourages them to gain greater interest in the sciences.
Creation Crate. Creation Crate drops the technical lingo and increases in difficulty each month so that users can be fluent in the language of technology by the end of the 24-month curriculum. Projects range from building a mood lamp, to a memory game focused on programming, to learning how to read input from an ultrasonic distance sensor. Unlike other technology subscription boxes, they use raw electronic components and offer users real-world skills. These boxes are designed to be beginner-friendly, with no previous experience needed. Subscriptions start at $30 a month, with 3-, 6-, or 12-month subscription packages to choose from.
Wonder Workshop. Wonder Workshop uses what it calls CleverBots to teach early robotics and interactive programming. They are packed with technology that helps kids develop critical problem-solving skills through challenging educational projects designed to make learning to code fun. Most of their Bots are for ages 11+.
Thimble. Thimble also uses electronics to teach robotics and programming, via a 1-, 3-, 6-, or 12-month subscription service. There are a dozen projects to choose from, and in each project you get the proper components and an online learning platform; they even have a forum for kids to exchange ideas, collaborate, and help each other.
KiwiCo. A STEAM subscription service (it offers Art as well) with a range of products for ages from infants to high schoolers, offering monthly, 3-, 6-, and 12-month subscriptions for about $20/month. This is a much broader service for all age groups, and you can pick the projects you want to work on. For 24-36-month-old kids, the focus is exploring and learning. For ages 3-4, the projects are about playing and learning. Ages 5-8 have projects aimed at science and art, and ages 9-16 include projects for art, design, science, and engineering.
Circuit Cubes from TenkaLabs. Circuit Cubes teach kids the basics of circuitry while they’re engaged in creative STEM play. Kids learn how to complete circuits to light an LED, power a geared motor, and how serial and parallel circuits create different effects in their projects. They integrate with LEGO-style bricks for endless projects. Ages 8-12.
Barbie STEM kit – Thames & Kosmos/Mattel, ages 4-8. When my granddaughters were younger, they were Barbie fans and would have loved these Barbie STEM kits. There are seven different projects to build with the kit, ranging from a spinning closet rack to a gear-based washing machine and a greenhouse. They even have some specialty kits, including a Barbie Crystal Geology set and a Barbie Fundamental Chemistry set. One of the great examples of learning about STEM while playing with a beloved figure.
Code Kit from LittleBits. Since I first heard about LittleBits, I have been a big fan of their STEM kits. One new kit from them geared toward learning about electronics is this Code Kit of snap-together magnetic Arduino modules, or “bits.” The idea is to simplify breadboarding and never need to get out the soldering iron. The bits are then connected — via computer — with a block-based graphical coding environment so kids can play around with and program the hardware.
Lego Boost Creative Toolbox building & coding kit. What kid does not like Lego blocks? Lego understands the STEM movement well and has created the Lego Boost Creative Toolbox, a robotics and programming system aimed at kids seven and older. With this toolkit, kids can build and customize a robot and learn how to code its movements and navigation. It has drag-and-drop icons for easy programming and teaches kids the basics of robotics and coding.
Last but not least is one of my favorites:
STEAM Kids ebook. A year’s worth of captivating STEAM (Science, Technology, Engineering, Art & Math) activities that should provide hours of fun. This is a downloadable book with projects in each area, designed to engage parents and children in new areas of discovery and skills. Books are sold individually or in bundles, including specific books for holiday-themed projects (i.e., Christmas, Valentine’s Day, etc.). For ages 4-12. $14.99. Comes in both eBook and traditional book formats.
At the Always-Connected PC launch event earlier this week, Microsoft and Qualcomm seemed to focus a great deal of their attention on the consumer opportunity for these new Snapdragon-based Windows computers. While there is certainly a market for this technology among some percentage of consumers, I would argue that the larger near-term opportunity is in the commercial segment, where connectivity and long battery life drive real-world productivity gains and measurable cost benefits.
Connected Consumer?
Carolina Milanesi discussed the launch event in detail earlier this week, including some of the ongoing app issues Microsoft faces, as well as the challenges associated with convincing consumers to pay for the carrier service required for an always-connected PC. Beyond these roadblocks, there’s an additional fundamental issue: many consumers, with students being the exception, tend to use their PCs in one place: inside their house. In other words, ultra-long battery life and LTE connectivity are both nice to have, but not critical, for a large percentage of consumer PC users.
However, for highly mobile workers, those two features are the holy grail of productivity. I travel extensively for work, and while today’s modern PCs offer substantially more battery life than ever before, I still often find myself working around my PC’s battery limitations. Sometimes it’s a 13-hour trip to Asia, where I do the important work up front, constantly eyeing the battery life indicator as it slides toward zero. Other times it’s running from one presentation to another, invariably forced to plug in before the last meeting, so the PC doesn’t die mid-presentation. The idea of a notebook that runs for 20 hours between charges is a game changer for users like me. The prospect of going days at a time between charges sounds almost too good to be true.
Likewise, there’s the issue of connectivity. Invariably somebody will point out that you can always connect to your phone as a hotspot, and yes that is an option. But it’s a task that takes time and effort to do, which can be problematic in some back-to-back meeting scenarios. And when you’re connecting like this, in an ad hoc way, everything must update at once, which means a flood of emails, etc. And tethering invariably leads to a secondary issue: Running down your smartphone battery. After years of carrying an LTE-enabled iPad, the benefits of an integrated LTE connection are quite clear to me.
Another interesting feature of these new PCs is their instant-on capability. Today’s PCs boot up and resume from sleep much faster than ever before, but they’re still far from instantaneous. The idea of a PC that wakes at the speed of a smartphone has clear productivity benefits.
Cost Savings and Challenges
So it’s clear that a subset of commercial users would embrace the opportunity to use an Always-Connected PC. Convincing their companies these devices are a cost-effective idea is the next challenge. But that’s not difficult when you can articulate the productivity advantages of outfitting high-output mobile employees with these devices. And yes, there is a monthly cost associated with connecting them to the network, but that cost can be rather quickly justified when you consider the ongoing fees many employees accrue while traveling and connecting to fee-based WiFi networks in hotels and other locations. Plus, there are the real-world security issues associated with connecting to random WiFi networks in the wild. And an LTE notebook might also drive cost savings for companies whose full-time remote employees currently expense their home office broadband connections.
Probably the bigger challenge here is convincing old-school IT departments to try a non-Intel, non-vPro-enabled Windows PC. These folks will also likely balk at the idea of Windows 10 S (the shipping OS on the initial launch devices, which is upgradeable to Windows 10 Pro). Some will also cringe when they hear that 32-bit x86 apps run via emulation (and 64-bit x86 apps aren’t compatible). Finally—and this is the most reasonable pushback—many will need to see real-world benchmarks that prove these systems are competitive with today’s x86-based systems for the use cases in question.
While some of these IT departments will likely pilot some of these new consumer-focused products, others will undoubtedly wait until Microsoft, Qualcomm, and their hardware partners move to ship more commercial-focused products. Others will undoubtedly wait to see how commercial LTE-enabled systems based on Intel’s 8th generation processors compare to Windows on Qualcomm. And that may well be the most exciting result of the news this week. With Qualcomm focused on the Windows PC segment, AMD resurgent in the space, and Intel working hard to sustain its position, all Windows PC users—consumer and commercial—will eventually benefit, and I can’t wait to test the first systems. Likewise, it will be interesting to see the eventual response from competing platforms such as Google’s Chrome OS and Apple’s MacOS.
Microsoft Whiteboard
At the Surface Pro launch a few months back, Microsoft previewed Whiteboard, a Windows 10 app designed to offer creative and business collaboration across devices. Up until this week, the app had been available in private beta, but starting this past Tuesday, Microsoft kicked off a public beta for anyone with a Windows 10 device.
Long-time readers of my column will know that I suffered a heart attack in 2012 and underwent a triple bypass. As you can imagine, this was a serious operation, brought on by long hours, extensive travel, poor eating habits, and minimal exercise over a 25+ year period. The good news is that when the heart attack struck, I knew what was happening and got to the hospital in time for them to stabilize me and start preparing me for open-heart surgery within 36 hours of the actual attack.
But from that point on, I was and still am a heart patient. Even though the surgery corrected the main issues with three of my arteries, I am still an at-risk person and have to closely monitor things like blood pressure, cholesterol, heartbeat, etc. One other thing that could be an issue, but hasn’t been so far, is something called AFib, an irregular heartbeat that can lead to other serious heart and health issues. AFib is a leading cause of strokes and is responsible for approximately 130,000 deaths and 750,000 hospitalizations in the US every year.
Until recently, the only way I could get this tested was to go to my doctor’s office, which I do twice a year, and have an EKG, which charts my heart rate and looks for any abnormalities such as AFib. But earlier this year I was sent a product from AliveCor to test. It is a small mobile device on which I place my fingers or thumbs; it registers my heart rate in detail and sends a signal to my iPhone that renders an actual EKG reading.
This device is FDA approved and allows me to take a personal EKG to check for AFib or any heartbeat irregularities anytime I want. This mobile solution also has an important option to get an expert to read the EKG should you see something in the chart that looks different or abnormal. The two options are to have a clinician read it and give feedback for $9.99, or to have an actual MD look at it and advise for $19.99. Thankfully, all of my readings over the year were normal, and I have not had to call for outside analysis.
On Nov 30, AliveCor introduced a new way to do this in the form of a watch band tied to the Apple Watch. While the KardiaMobile reader works well, it is another thing I have to carry with me if I am going to do this daily, especially while on the road. Called the KardiaBand, it sells for $199 and requires a $99-a-year subscription, but I consider this a small price to pay for early warnings of AFib and the ability to do an EKG easily, anytime I want. I have been testing the KardiaBand for about a week now, and like the KardiaMobile device, it gives me an EKG reading on demand. But since I am wearing the band, it is a bit easier than digging out the KardiaMobile device and using it, which means I can get readings more often to stay on top of my overall heart health.
I realize that this will probably get more attention from an older audience, or people with Type 2 diabetes and high blood pressure, as AFib is a leading cause of strokes, and watching for any changes in EKG readings can and will save the lives of high-risk people. However, I have friends who had a stroke in their 20s and 30s, and if any heart disease runs in your family, the KardiaMobile reader, which costs $99, or the KardiaBand needs to be considered as part of your overall health monitoring program.
Also on Nov 30, Apple introduced an important heart study it is conducting with Stanford that uses the Apple Watch to perform a similar EKG-like test to check specifically for AFib. https://www.apple.com/newsroom/2017/11/apple-heart-study-launches-to-identify-irregular-heart-rhythms/
Since this is a study, it does not need FDA approval, but the program does provide direct contact with a physician should the Apple Watch, through this special study monitoring program, detect any abnormalities in your heart readings. At that point, you will be notified that there might be a problem, and they will send a special patch that you wear for seven days to monitor your heart readings 24/7 and get a more precise analysis. If AFib is detected, they will have you see a doctor or cardiologist as soon as possible.
According to Apple, “To calculate heart rate and rhythm, Apple Watch’s sensor uses green LED lights flashing hundreds of times per second and light-sensitive photodiodes to detect the amount of blood flowing through the wrist. The sensor’s unique optical design gathers signals from four distinct points on the wrist, and when combined with powerful software algorithms, Apple Watch isolates heart rhythms from other noise. The Apple Heart Study app uses this technology to identify an irregular heart rhythm.”
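Neither Apple nor AliveCor publishes its detection algorithm, but the basic intuition, that AFib shows up as beat-to-beat timing with unusually high, patternless variability, can be sketched in a few lines. The following is a deliberately naive illustration with a made-up threshold, not a clinical method.

```python
import statistics

def rhythm_looks_irregular(rr_intervals_ms, cv_threshold=0.12):
    """Naive irregularity check on beat-to-beat (RR) intervals.

    AFib tends to produce RR intervals that vary widely with no
    repeating pattern. A crude proxy is the coefficient of variation
    (stdev / mean); the 0.12 threshold here is an illustrative
    assumption, not a clinically validated value.
    """
    mean_rr = statistics.mean(rr_intervals_ms)
    return statistics.stdev(rr_intervals_ms) / mean_rr > cv_threshold

steady = [810, 805, 815, 800, 812, 808, 803]    # evenly spaced beats
erratic = [620, 990, 710, 1150, 540, 880, 760]  # chaotic spacing
print(rhythm_looks_irregular(steady))   # False
print(rhythm_looks_irregular(erratic))  # True
```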
Apple’s interest in health stems from Steve Jobs’ own health issues. As he became more in tune with the importance of finding more proactive ways to impact and monitor one’s health, he made this one of the tenets of Apple’s overall vision. As I have often written over the last few years, Apple is serious about helping its customers stay healthy, and this Heart Study is another sign of that commitment.
I’m not sure if there is a misunderstanding of the smart speaker category or if it is just a product of how I have my news, analysis, and Twitter feed curated. Whatever the case, I’d like to elaborate on how I think about this space now that we have a few years of market intelligence and consumer behavior with these products in our database.
In retrospect, we should have seen this category coming from a mile away. To understand why it took off so quickly, all we need to do is look back at the iPod, perhaps even further to the Walkman. The value of a personal and mobile music collection was a driving force behind some of the most successful consumer electronics products up until the smartphone. To this day, listening to a music collection remains one of the most common smartphone use cases. Bottom line: music matters a lot to consumers.
If you keep an eye on what the financial analysts are saying about Apple these days, you know that almost all have raised their stock price targets closer and closer to the $200 per share range. Almost all are bullish, and some believe Apple’s new fiscal year will break all records and that we could see Apple become the first company ever with a trillion dollar valuation sometime in 2018.
This week, at the Snapdragon Summit, Qualcomm and Microsoft launched their Always Connected PC initiative. This is not the first time we’ve heard about connected PCs. Cellular connectivity has been available on PCs for years, and thus far penetration among consumers has been relatively low. There are, of course, regional differences in markets like Europe, where WiFi connectivity is both hard to come by and expensive. But overall, consumers seem happy to use their phone hotspot for those times they really need to be connected.
When it comes to tablets, connectivity has mattered more to consumers, especially in those regions where you can either add a tablet to your data plan for as little as $10 or have a separate data plan with no contract obligations. iPads have had a reasonably high attach rate for cellular, with our data showing numbers as high as 49% in the US. Yet, of those devices, only 45% have an active data plan associated with them; in other words, only about 22% of US iPads are actually connected. The most significant driver (52%) for these consumers is the peace of mind of always having connectivity just in case they need it. And peace of mind, as well as convenience, are always big drivers!
With holiday shopping in full swing, PC vendors are looking for ways to entice consumers to spend their holiday budget on a new PC. Intel has been showing how new technologies like 4K gaming and video, as well as VR, will not be available to you unless you invest in a new PC. And soon Microsoft, Qualcomm, and their partners will be busy talking about the joys of the Always Connected PC.
The Always Connected tagline is not limited to cellular connectivity. It also speaks to a PC that has instant-on and long battery life. It promises a computing experience that will come with you wherever you are, freeing you from looking for a power source, depending on free unsecured Wi-Fi, or jumping through hoops to connect to your phone. While much depends on what kind of offers we will see from carriers, who would not want to be free to work or play anytime, anywhere? This will be particularly true if devices start shipping with a free connection trial, so users get hooked on that convenience and peace of mind.
Apps & Services drive the Need for Connectivity
If you have seen the latest iPad commercial and can relate to it, you might have already bought into the promise of an always connected computing experience. It is ironic that Apple is helping sell the vision that Microsoft, Qualcomm and their partners want to deliver. Except, of course, Apple is also telling you that your always connected life does not require a PC.
And here is the heart of the matter. For consumers, the desire to be connected has little to do with being productive and a lot to do with getting “stuff” done whenever we want. That stuff can range from streaming music, to uploading to social media, to playing online games, to shopping online… basically being able to do the same things we do on our smartphones, but with the advantage of a larger screen and a keyboard. Forty-three percent of the consumers we interviewed who have a connected iPad said they do a little bit of everything. This does not mean we will carry our smartphones less or rely on them any less. It simply means we will have the option to choose the best tool for the job without having to compromise on connectivity and battery life.
While connectivity and battery life will no longer be in question, the Always Connected PC must deliver on the variety of apps and services we can access with it. This will require a stronger investment in the Windows App Store than what we have seen so far from Microsoft, especially as they try to position Windows 10 S – which is fully reliant on store apps – as the most modern computing experience.
The Windows App Store was the weakest link for Windows Mobile, and it cannot be the weakest link for the Always Connected PC, or for Windows 10 S, for that matter. It would be a terrible mistake to think that connectivity for productivity reasons alone will be enough of a draw to see consumers flock to stores and buy these devices. I am sure both OEMs and carriers have learned a lot from the netbook experiment, not just in terms of design and marketing but also in terms of the value proposition consumers must see in a device.
An Opportunity for Phone Manufacturers to Broaden their Scope
Traditional PC manufacturers continue to look for new drivers to fuel sales, and Always Connected PCs are just another way to get consumers’ attention. Yet, some might be a little shy about investing too much in this segment, given how the netbook and Windows RT experiments ended. At the Snapdragon Summit we saw devices from Asus and HP, and Lenovo was mentioned as having a device in time for CES. The challenge for pure PC manufacturers rests on balancing support for all connected PCs, Intel-based as well as Qualcomm-based ones, while helping consumers decide within their full product portfolios.
Given Always Connected PCs will speak more to highly mobile users, I see a great opportunity for phone manufacturers such as Samsung and Huawei to invest in these devices to widen their reach. They have the relationship with the carriers and with Qualcomm as well as their own semiconductor capabilities. Plus they do not have to figure out where to place these products within a wider PC offering. Samsung, in particular, with its trusted Galaxy brand, might be seen by many consumers as a natural choice for an Always Connected PC.
Samsung has played in the PC space at a worldwide level with a few devices but has not really put much marketing push behind its effort. This initiative might indeed offer a good opportunity to try a more aggressive approach without having to commit to becoming an all-around PC vendor. Samsung could, of course, also consider the enterprise market as its ambitions of delivering Knox as a full-fledged platform strengthen. Yet, this road will require a tighter collaboration with Microsoft than we have seen thus far.
Consumers will not care about who empowers their Connectivity
Connectivity must come with the right design, the right marketing and, most of all, the right price point. What the right price point will be is heavily dependent on the value buyers will see in the experience delivered to them. What will not matter to consumers is how the connectivity is delivered, and we already know that while the Always Connected PC effort is now driven by Microsoft and Qualcomm, Intel will be jumping on the bandwagon too, while trying to position its solution as superior.
What will ultimately matter to consumers when choosing a solution remains to be seen. Will consumers trust Qualcomm, which is responsible for their everyday connectivity on their phones? Or will consumers be looking for an Intel Inside logo, as they always have when buying a new PC? Hard to say at this point, but two things are clear: Microsoft and Qualcomm must invest in building a differentiated value proposition, and they must help consumers understand what it is they are buying into.
The value proposition of Always Connected PCs might revolve around positioning these devices closer to a smartphone than to a traditional PC. The freedom of a phone experience when it comes to things to do, battery, and ease of connectivity, coupled with a bigger screen, a modern PC OS, and a highly mobile form factor, is what consumers are looking for. A solution that, if implemented right, might even have Windows users questioning what a PC really is, as they embrace a modern computing experience.
Virtually everyone who closely watches the tech industry has heard venture capitalist Marc Andreessen’s famous quote about “software eating the world.” The implication, of course, is that software plays the most important role in tech and the capabilities of software are the only ones that really matter. In addition, there’s the further suggestion that the only way to really make money in tech is with software.
While I won’t disagree with the underlying principles, I am starting to wonder if what we’ve traditionally thought of as software will really continue to exist several years into the future. It’s not that there won’t be code running on hardware devices of all types, but the way it’s packaged, sold, discussed, and even developed is on the cusp of some radical transformations.
In fact, there have already been substantial changes to the traditional types of software that were so dominant in the tech industry for decades: operating systems and applications.
Operating systems (OS’s) used to be considered kings of the software hill. Not only did they sit at the heart of client devices, servers and virtually every intelligent device ever created, they also enabled the all-powerful ecosystems. It was their structure, rules, APIs and other tools that enabled 3rd party companies to create applications, utilities, add-ons, and other software pieces that turned OS’s into platforms.
While those structures remain in place, the world around us has evolved to include multiple important OS options. In addition, though there are certainly important differences between OS choices across different types of devices, most application vendors have had to focus on the commonality across platforms, rather than those unique differences, leading to applications that run across multiple platforms. For this, and many other reasons, platforms and specific operating systems have lost much of their value. Yes, they still serve an important purpose, but they are no longer the sole arbiters of what kinds of applications can be built.
Applications have also seen dramatic transformations. Gone are the days of large, monolithic applications that only run on certain platforms. They’ve been replaced by smaller “apps” that run across a variety of different platforms. From a business model perspective, we’ve gone from standalone applications costing hundreds of dollars to single digit dollar mobile apps to completely free apps that rely on services and subscriptions to make money.
Even in the world of large applications, there’s been a dramatic shift to subscription-driven pricing, with Microsoft’s Office 365 and Adobe’s Creative Cloud services being some of the most popular. Not all end users are excited about this model, but it seems clear that’s where traditional applications are heading.
Service and subscription-driven models have also come to mobile clients, servers and other devices, as companies have realized that the continuous flow of smaller amounts of regular income provided by these models (as opposed to large lump sum purchases) offers much more stable revenues.
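A tiny worked example, using made-up numbers, shows why: the same customer can yield a similar total over three years, but the subscription arrives as a smooth, forecastable stream rather than a single spike.

```python
# Illustrative comparison of revenue timing; all figures hypothetical.
LICENSE_PRICE = 300          # one-time purchase, replaced every 3 years
SUBSCRIPTION_PER_MONTH = 10  # recurring fee
MONTHS = 36

license_stream = [LICENSE_PRICE if month == 0 else 0 for month in range(MONTHS)]
subscription_stream = [SUBSCRIPTION_PER_MONTH] * MONTHS

# Totals are comparable (300 vs. 360), but one arrives as 36 equal,
# predictable payments -- the stability vendors and investors prize.
print(sum(license_stream), sum(subscription_stream))
```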
Even the structure of software has changed, with large applications being broken down into smaller chunks that can act independently, but work together to provide the functionality of a full application. This notion of containers (or chunks of code that function as independent software objects) is particularly prevalent among cloud-based applications, but it’s not hard to imagine it being applied to device-based applications as well. In addition to their other benefits, containers bring with them platform and physical location independence and portability, two key attributes that will be essential for new types of computing architectures—such as edge computing—which are widely expected to dramatically influence many future tech developments.
Another benefit of containers is reusability, meaning they can be leveraged across multiple applications. While this is certainly interesting, it does start to raise questions around complexity and monetization for containers that don’t yet have easy answers.
There are even growing questions about what really constitutes software as we know it. Technically, building voice-based “skills” for an Amazon Echo-based product is software design, but the manner in which people interact with skills is much different from how they’ve interacted with other types of software. As digital assistant models continue to evolve, the nature of how these component-like pieces are integrated into the assistant platform will also likely change. Plus, as with containers, though some new experiments have started, there are still serious questions about how this type of code can be monetized.
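To see why building a skill is software design yet feels different, consider a schematic handler (hypothetical names, not Amazon’s actual SDK): the platform owns the speech recognition and the conversation, and the “program” is largely a mapping from recognized intents and slots to responses.

```python
# Schematic voice-skill handler; illustrative, not Amazon's real SDK.
# The assistant platform parses the user's speech and hands the skill
# a recognized "intent" plus any extracted "slot" values.

def handle_intent(intent: str, slots: dict) -> str:
    """Map a recognized intent to the text the assistant speaks back."""
    if intent == "OrderCoffee":
        drink = slots.get("drink", "coffee")
        return f"Okay, I've ordered a {drink} for you."
    if intent == "GetOrderStatus":
        return "Your order is ready for pickup."
    return "Sorry, I can't help with that yet."

# The platform would invoke this for each utterance it parses:
print(handle_intent("OrderCoffee", {"drink": "flat white"}))
```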
Finally, and most importantly, virtually everyone is adding Artificial Intelligence (AI) and machine learning capabilities into their software. Right now, many of these additions are relatively simple pattern-recognition functions, but the future is likely to be driven by software that, in many ways, can start to rewrite itself as it learns these patterns and adjusts appropriately. This obviously marks a significant shift in the normal software development process, and it remains to be seen how companies will try to package and sell these capabilities.
Taken together, the implications of all these software-related developments are profound. In fact, one could argue that software is being “eaten” by services. That’s already occurring in several areas (think Software as a Service, or SaaS), and the future of code-based capabilities will likely all be delivered through some type of monetized service offering. While that may be appealing in some ways, there is a legitimate question about how many services any person, or any company, will be willing to sign up for, particularly when there are costs attached; we need to realistically recognize that this business model can only be taken so far.
Watching the tech industry evolve over the last several decades, it’s fascinating to see how many pendulum shifts occur across many different segments. From computing paradigms to semiconductor architectures to the role and balance between hardware, software and services, it seems that what was once old can quickly become new again. In the case of software—which used to be bundled for free with early computing hardware—we may be coming full circle, with most code soon becoming little more than a means to sell services that leverage its capabilities. It certainly won’t happen overnight, but the end of software as we know it may be sooner than we think.
One of the larger themes I’ve been raising in industry conversations with tech leaders and executives is how giving computers the ability to see and hear us will define a new era of computing. All the advancements in machine learning over the past few years have set a new foundation for giving computers the ability to learn on their own. A computer’s ability to learn depends entirely on the information available to it. This data has largely been textual, but we are on the cusp of computers being able to use all their sensors to learn as well. This includes the microphone, camera sensor, GPS, accelerometer, and a host of other optional sensors available to integrate into our computers. I do believe, however, the camera sensor will be the one that leads to the most obvious customer value from machine learning and AI.
We all know that a competitive market is a healthy one. Multiple options for your PC, your smartphone, and yes, your server mean that every party involved needs to be more aggressive in development to outdo the competition. For many years, that type of environment did not exist in the server space, and Intel was able to dominate the field with the Xeon processor family and almost no pressure from outside companies.
AMD announced its EPYC processor family this past summer, and though it always takes time for adoption and ramp of a new enterprise-class technology, there has been more anticipation for retail-ready releases from this launch than any other. Many questioned AMD’s ability to re-enter the server market against Intel’s 99%+ market share and strong grip on the hardware and software ecosystem. Any noise or promotion that might come from partners would be welcome, and indeed required, for the community at large to have confidence in AMD’s claims.
HPE Claims Performance Records with EPYC
Last month we got one of our first such endorsements, and likely the most important to date. HPE not only announced its second family of servers integrating AMD EPYC processors, but did so with a press release touting record-breaking performance and impressive claims across the board. For those that want the details: the HPE ProLiant DL385 Gen10 system with dual-socket EPYC processors broke records for two-socket systems on SPECrate2017_fp_base and SPECfp_rate2006. In short, the server showed impressive scaling in floating point performance by combining a pair of 32-core EPYC processors in these industry-standard benchmarks.
Additionally, HPE claimed that this platform would offer “up to 50% lower cost per virtual machine” compared to other dual-socket servers, thanks in part to the 4TB of addressable system memory and 128 lanes of available PCI Express provided by the AMD CPUs.
This is just one server family, and just one OEM (albeit a big one), but it marks another milestone in AMD’s march back to relevancy in the server space. AMD CEO Lisa Su cautioned me recently that this would be a slow and arduous process, even if AMD was excited about the rate of adoption it was seeing. It tells me that AMD is doing the right things, working with the right people, and has the right mindset and aggressive stance to make waves in the enterprise space once again.
Intel is Paying Attention
For its part, Intel is taking notice. Though it holds a 99%+ market share in the server and data center space, Intel went to the press last week with internal testing that compares its own Xeon Scalable processor family against the AMD EPYC platform. It runs through a myriad of benchmarks and concludes that Intel still offers an advantage in performance per core and in many of the workloads and benchmarks that server professionals look to for guidance. Intel also questioned the performance consistency of AMD EPYC processors because of the complications surrounding AMD’s multi-die approach to core scaling (as opposed to the single, monolithic die that Intel uses).
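To illustrate the “performance per core” lens Intel is applying, the same throughput score can look quite different once you normalize by core count. The scores and core counts below are hypothetical, not published results.

```python
# Hypothetical SPEC-style throughput scores normalized per core.
systems = {
    "64-core dual-socket EPYC (hypothetical)": {"score": 250, "cores": 64},
    "56-core dual-socket Xeon (hypothetical)": {"score": 230, "cores": 56},
}
for name, s in systems.items():
    print(f"{name}: {s['score'] / s['cores']:.2f} points per core")
# A lower total score can still win on per-core performance.
```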
It’s unlikely that any of the results Intel presented to the media are “wrong”, but the importance of the effort, in my mind, is that Intel felt pressured enough to address this publicly. All companies do competitive analysis on systems and hardware, but rarely is that data presented in a fashion that essentially “calls out” the competitor and invites coverage by media and analysts. It means that Intel sees a threat and is taking it seriously – something it hasn’t done in this space for nearly a decade.
AMD was upfront during its launch of EPYC that it would do very well in specific areas of the enterprise and datacenter space but would be behind what Intel can offer in others. That still seems like an accurate assessment, though Intel is doing some of the heavy lifting to indicate where those “other” areas are. I still view EPYC as competitive in enough areas to retain its original value proposition, and it appears that partners like Supermicro and HPE agree.
As we move forward, the future of the server space is brighter for customers thanks to a competitive landscape. Intel’s executives and bottom line won’t appreciate any drop from near-100% market share, obviously, but for the rest of us, a healthy and active AMD in this space is a critical piece of the story of improvement and scalability for the datacenter. AMD should continue to see customer adoption and a resulting improvement in the financial status of its enterprise business unit.
Within the past two weeks, the Department of Justice announced that it intends to block AT&T’s proposed acquisition of Time Warner, and the FCC’s new Chairman Ajit Pai announced his intention to repeal network neutrality. Looked at together, these actions send a mixed and inconsistent signal from our government. How so?
To begin with, let’s recall that Pai declined to review the merger, leaving it in the hands of the DOJ. Now, one of the concerns the AT&T-TW deal has raised is that AT&T could advantage its subscribers by zero-rating services such as HBO. A strict application of network neutrality rules could have been used to block such practices, so doing away with network neutrality does remove a potential check on the very sort of behavior the DOJ is concerned about.
That said, the FCC, even with network neutrality in place, has not gotten in the way of zero-rating, whether from AT&T-DTV, T-Mobile’s Binge On, or others. And while zero-rating video content for AT&T customers who get DTV is a ‘feature’ that has undoubtedly led to some subscriber gains, there hasn’t exactly been an outcry from competitors, or much evidence that consumers are being ‘harmed’ by such practices. I think we recognize that this is just part of the rapidly changing telecom/media landscape, which is leading to much experimentation with business models. As an aside, how is AT&T zero-rating video services for its subscribers any different from Amazon offering certain content free to its Prime subscribers?
The DOJ makes a lot of assumptions about what AT&T would do if it acquires TW. If it’s concerned, why not impose conditions or some form of oversight? The two most valuable properties, CNN and HBO, are in tens of millions of homes on other pay TV and cord-cutter services. Plus, anyone with a broadband connection can get HBO on a standalone basis, a refreshing departure from the historic practice of tying it to a pay TV subscription. That alone should be evidence to the DOJ that the landscape has changed. If AT&T took any action on discriminatory pricing for HBO, or, in the most extreme case, blocked competing services from carrying it, the harm to HBO’s business would be much greater than the advantage gained by AT&T.
(With regard to the AT&T-Time Warner deal, I’m assuming the DOJ is acting on its own and that there isn’t any White House “because of CNN” influence that would affect a rational calculus. On that front, I’m somewhat surprised Trump hasn’t made a similar case against HBO, since John Oliver has been far harsher on the president than CNN.)
Now, I’m not expecting the FCC and the DOJ to ‘coordinate’, since they have different charters. The DOJ’s primary mission here is antitrust, not setting telecom policy. But its antitrust ‘lens’ must take into account changes in the technology and media landscape. It did allow the Comcast acquisition of NBC Universal, with conditions on the sorts of potential practices it is ostensibly concerned about with regard to AT&T-TW. And what has happened since then? Tens of millions of consumers have cut the cord, there are now several viable competitors to the traditional cable ‘pay TV’ model, and there is no evidence that Comcast has engaged in any unseemly practice with regard to competitors carrying its content properties. What’s more, whereas Comcast might view Netflix as the ‘enemy’, in that it’s one of the reasons some folks cut the cord or downgrade to a skinny bundle, the company has instead integrated Netflix quite nicely into its X1 interface, thank you very much.
This tells us that looking at telecom and media through a 1934 (Communications Act), 1996 (Telecom Act), or even a 2005 lens (pre-iPhone, pre-LTE, pre-streaming) is outmoded. Rather than appearing to be in line with the ‘reverse everything Obama did’ motif of the Trump administration, perhaps it’s time for the FCC to take a broader look at network neutrality within the context of a wider revisit of the Telecom Act. This would allow the FCC to account for some of the massive changes occurring in telecom/media/internet, and as a byproduct enlighten other government agencies, from the DOJ to the FTC. The outcome would hopefully be a sounder, more consistent approach that provides greater flexibility around industry structure while preserving reasonable protections for consumers.
Google Might Fold Nest Back into the Devices Team
Google is considering folding its home-automation unit Nest Labs into its hardware team, according to people familiar with the talks, reversing a major element of Google’s split two years ago into various businesses under holding company Alphabet.
Developing a hardware product is full of challenges. The process of going from a concept to a finished product is filled with unknowns and takes a lot of time. It’s an iterative process of creating a design, building a prototype, and testing it, repeated a half-dozen times or more before getting it right.
It’s one thing to build a working prototype, but it’s much more difficult to mass-produce a product efficiently and reliably so that it meets customers’ expectations. Maybe that’s why VCs prefer software investments to hardware. But from my experience, there’s nothing more satisfying than creating physical objects that can be sold in the millions around the world.
I recently researched the development times of about 50 consumer electronic products, ranging from smartphones to wearables to printers to audio products, developed by organizations of all sizes. These were standalone products and not simple accessories.
What I found was that the time from initial industrial design to first customer shipment averaged about 2 1/4 years. Surprisingly, the spread was narrower than I expected, ranging from about 1 1/4 years to about 3, not counting a few outliers at the long end. Smaller companies were sometimes faster than large companies with many more participants, perhaps because a small team of experts can be more productive than a large bureaucratic organization.
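For the curious, here is a toy recreation of that summary statistic. The sample below is invented to illustrate the shape of the data; it is not the ~50 actual products in the study.

```python
# Toy sample of time-to-ship figures (in years); the real dataset is not public.
from statistics import mean

years_to_ship = [1.3, 1.8, 2.0, 2.2, 2.4, 2.6, 3.0]  # hypothetical values
print(f"mean: {mean(years_to_ship):.2f} years, "
      f"range: {min(years_to_ship)} to {max(years_to_ship)} years")
```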
Development time was rarely affected by how complex the product was, because the more complex projects had larger teams. One of the biggest contributors to development time was the interval between completing each design cycle and building and testing how that design functioned. Much of that time went to fabricating parts, ordering components, and just waiting. The second biggest contributor was changing the product requirements after development was underway.
What was also evident was that products requiring custom parts took longer than those relying on off-the-shelf components. But the lead time, even for off-the-shelf parts, particularly electronic components and displays, was unpredictable and often meant paying more to buy from the spot market.
But one fact stood out. For those products built in very high volumes, supply chain issues became a major factor. Not only did they affect the schedule to get into production, they also determined the design approach taken and the materials and components selected.
A good example is the new iPhones. Early rumors predicted the phones might be made of exotic materials such as ceramic or titanium to make them much more durable. These materials require considerable infrastructure to fabricate and are not easily scalable to huge volumes.
Instead of Apple, it was Essential, a startup founded by Andy Rubin, that did just that, creating a phone made of titanium and ceramic. With sales forecasts a tiny fraction of Apple’s, Essential could easily find a supplier and develop a process to meet its needs. Apple was constrained by its own success.
When designers and marketers spec their products, they need to consider component availability not at the beginning of the project, but when the product will ship. They also need those components during the product’s development, so the component’s life cycle needs to be carefully considered.
While smaller companies may have an easier time supplying their needs, obsolescence becomes a factor in a product’s development schedule. Many components, including off-the-shelf electronics and displays, are continually phased out as new components come online, but often not in sync.
The larger product companies can more easily get access to their suppliers’ roadmaps, which detail the phasing in and out of components. Smaller companies, who may not even be aware the roadmaps exist, are often surprised to find that, as their new product is going into production, one component suddenly goes end of life. The solution is to quickly find a replacement and design it in, often causing unexpected expense and delays to the schedule.
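The underlying check is simple enough to sketch: compare each part’s end-of-life date against the planned ship date. The part names and dates below are hypothetical, purely for illustration.

```python
# Hedged sketch of a bill-of-materials end-of-life check; data is invented.
from datetime import date

ship_date = date(2018, 9, 1)             # planned first customer shipment
bom_eol = {
    "display_panel":  date(2019, 6, 1),  # supplier end-of-life dates
    "ambient_sensor": date(2018, 3, 1),  # goes end-of-life before we ship
}
for part, eol in bom_eol.items():
    if eol < ship_date:
        print(f"WARNING: {part} reaches end of life ({eol}) before ship date")
```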
Once you understand the impact of supply chain issues, you’ll dismiss those rumors about changes being made to the iPhone design six weeks before introduction. Any product produced in such large volumes needs to freeze its design and lock in parts and manufacturing processes many months before production begins. The slightest change would delay production by months.
As difficult as it is to develop hardware products, supply chain issues add to the challenge throughout the design cycle and after the product goes into production. And from my experience, they can be a major element in a product’s development time and in the design itself.
Few products have caused more controversy than the tablet. The common viewpoint was that tablets were not computers or computer replacements. The overall sentiment was that tablets were great for media, browsing the Internet, playing games, and basic use cases in general. This may certainly be true of certain products in the tablet category, but that is largely because their screens were small or because they lacked the software ecosystem to unlock their full potential as computers.
Earlier this month, we at Creative Strategies conducted a study across 1,000 US consumers to understand how smartphone owners use the camera on their phones. We also wanted to know how much of a driver camera quality is when it comes to buying a new smartphone, as well as what else users might want to see added to their camera capabilities.
Long gone are the days when we talked about cameraphones as a sub-segment of the mobile phone market. Today, while the quality might differ, it is almost impossible to find a mobile phone without a camera. And so, as users, we have come to embrace this feature on our phones wholeheartedly. Forty-three percent of our panel said they take pictures with their phones daily and another 32% said they do so weekly.
Apple’s marketing line that the iPhone has become the most used camera in the world rings true in our data: 54% of iPhone owners take pictures daily, and 77% say they take up to 30 pictures a week.
Our Reliance on Cameras Is Growing
Not only do we take pictures often, we also have a wide range of things we love to take pictures of. Outside of Gen Z, selfies are not a priority for most of us, with only 19% of our panel saying their lovely self is the most likely subject of their pictures and another 23% saying they most often take pictures of themselves with someone else. Fifty-six percent of the consumers we interviewed said they most often take pictures of scenery, making it the most popular kind of photograph. Forty-three percent of the panel said they take pictures of their pets the most, and another 37% mentioned their kids as their most popular subject. If you follow me on Twitter, you already know I fall into these last two categories. Surprisingly, especially if you are on Instagram and pay attention to all those #cameraeatsfirst posts, only 22% of the panel said that food is their most photographed subject.
Interestingly, the second most popular subject for our pictures has very little to do with making memories and a lot to do with just our memory! A whopping 50% of the consumers we interviewed said they most often take pictures of information they need to remember. As camera quality has improved, we have been able to take pictures of slides at a presentation, ingredients on food packaging, or a receipt in case we lose it, or to scan documents that can be saved as PDFs for us to sign or edit. These are all things we would not have done with a regular camera, when photography was more of an art form than a practicality.
The Feel-Good Factor of a “Real Camera”
Although our reliance on the smartphone’s camera is growing, not everybody is ready to give up the safety blanket of a real camera. Seventeen percent of the consumers we interviewed said they actively use a DSLR, and another 9% actively use a compact point-and-shoot camera. DSLRs are even more popular with early tech adopters, among whom active usage grows to 30%.
For active users of standalone cameras, the reason to have a dedicated camera rests equally on the ability to have more control over the pictures they take and on the belief that a dedicated camera still takes better pictures than a camera on a phone. For DSLR owners, control over their shots is the main driver, with 82% calling this reason out.
The Love of Photography Does Not Depend on the Camera You Use
As I was dissecting the data, it became clear that current DSLR users do not feel much different about their smartphone camera than users who rely solely on their phones to take pictures. This might come as a surprise, as DSLR users are often seen as photography purists and might therefore be expected to be more critical of technologies that replicate the results, but not the experience, of taking a picture.
First, DSLR users are actually more impressed (42%) than regular smartphone camera users (38%) by the quality of the pictures that we can now snap with our phones. Second, they are appreciative of the fact that smartphone cameras allow them to capture moments in their life in a way that a dedicated camera never did (42%).
Where DSLR users differ from “regular” smartphone users is in their wishlist of features for their next smartphone camera. Both groups want better low-light and zoom capabilities, but after that, the love of being in charge of your own shot versus capturing the perfect shot splits the groups. Among panelists who rely only on their phone to take pictures, 46% want smarter camera software to help them take the best possible picture, while for DSLR users better image stabilization is the priority, at 45%.
DSLR users also seem to be much more engaged with their smartphone camera, using it more often and across a wider range of activities than consumers who rely solely on their phone.
The most fascinating data point, in my view, when it comes to DSLR users and their love of photography is that for 24% of them the camera is the most important factor driving their smartphone purchase decision. This compares to only 14% among consumers who rely only on their phone for their pictures.
The moral of the story: smartphone cameras might have killed sales of dedicated cameras, but not the love of photography!
During most of my time covering personal computers, the OS Wars were basically between Apple’s Mac OS and Microsoft’s Windows OS. The battle between these two operating systems has been fierce at times with loyal users on each side swearing by their preferred operating system.
Then came the mobile OS wars, and while at first there were at least four or five mobile operating systems vying for supremacy, today the real battle is between Apple’s iOS and Google’s Android. But we are about to move into a new era of what I call the mini operating system wars, designed to work with a new class of CPUs that run devices at what is called the Edge.