Some fascinating news broke yesterday: Apple is returning to Google for certain aspects of search within iOS and other core software. Google will once again be the default search within iOS and macOS, as well as for web-based search within Siri. When the iPhone first launched, Google was an essential partner for Apple, as Google search and Google Maps were core features touted as capabilities of the original iPhone.
Sometimes, context and comparison can really make a difference. At the company’s combined Envision and Ignite events in Orlando this week for both business and IT professionals, Microsoft showed off its ability to reach the extremes of computing. It talked about both new low-end (sub-$300) Windows 10 S-based notebooks, as well as entirely new types of computing with a circuit board, prototype device, and programming language built for a functioning quantum computer.
On a practical level, the new Windows 10 S devices coming soon from HP, Lenovo, and Acer are probably a much better iteration of what 10 S-based computers should look like. Recall that Windows 10 S is a “simplified” or cleaned-up version of Windows 10 that can only run modern Windows 10 applications available from the Microsoft store, and was (until now) primarily targeted towards the education market. Specifically, the apps must comply with all the “rules” that Microsoft has defined for the most optimized performance and long-term stability on Windows.
In theory, 10 S is a great idea that can rid the world of problematic applications, allow PCs to run faster and more consistently and, best of all, avoid the inevitable Windows “rot” that slows your computer down as you use it over a period of time. In reality, however, there are a lot of applications that don’t conform to all of Microsoft’s rules—especially in business environments, where custom applications are extremely common.
Not surprisingly, as a result, 10 S has seen relatively little adoption in the enterprise, even though Microsoft initially tried to drive a higher-end view of 10 S by installing it on the pricey Surface Laptop. With this week’s announcement, however, Microsoft is targeting Windows 10 S at what it calls firstline workers: everyone from receptionists to sales clerks, the roughly 2 billion people who are often the first to interact with a business’ customers on the front lines. The argument is that many of these workers have simpler computing needs, so a less expensive, less powerful, and less flexible device will still be more than sufficient.
While it’s easy to pick apart some elements of Microsoft’s position, frankly, this is the same group of workers that companies building and selling thin clients have successfully focused on for years. At least with these new Windows 10 S notebooks, they get a mobile computer and local storage, two key shortcomings of thin clients. Plus, the notebooks come at a price point that is actually cheaper than some desktop-only thin clients. Finally, and most importantly, one of the real distinguishing parts of this new offering is a low-cost version of Microsoft 365, which combines Microsoft’s Office 365 productivity applications with security and manageability services. Taken together, it’s a pretty compelling package that I think will finally bring some life and opportunity to Windows 10 S in business.
At the other extreme, Microsoft’s announcements on quantum computing were absolutely revolutionary. The company has chosen to follow the path of topological quantum computing—apparently, one of several options being researched around the world—and discussed an array of extremely complex math, physics, and computer science challenges coming together via a 12-year effort.
Using a vocabulary that practically sounded like a foreign language—qubits, Majorana fermions, decoherence, etc.—the company described its efforts to turn mathematical theory into practical reality via a chip that can perform quantum calculations, a steampunk-looking computing device that operates at near absolute zero (the extreme cold is currently necessary to manipulate qubits), and even a programming language integrated into Microsoft’s Visual Studio development environment that can create algorithms designed for quantum computing applications.
All told, it was an extremely impressive, though still confusing, discussion of where the next several decades of computing will likely be focused. To make it a bit more practical, the company even announced the ability to create quantum computing algorithms that, for now at least, can be simulated on the Azure cloud computing platform.
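To make the notion of simulating quantum algorithms on classical hardware slightly more concrete, here is a toy sketch in Python with NumPy. This is purely my own illustration, not Microsoft’s actual toolchain: a single qubit’s state is just a two-element complex vector, and applying a quantum gate is just a matrix multiplication.

```python
import numpy as np

# A single qubit's state is a 2-element complex vector; |0> is [1, 0].
state = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ state

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # → [0.5 0.5]
```

The catch, and the reason even cloud-scale simulation only goes so far, is that the state vector doubles in length with every qubit added, so a few dozen simulated qubits are enough to exhaust classical memory.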
While Microsoft never made any comparisons between the low-cost Windows 10 S notebooks and quantum computing announcements, as an event attendee, you couldn’t help but notice how stark the difference was between them. Some might argue that the range was a bit too wide, but it reflects the breadth of Satya Nadella’s vision for Microsoft, and how the company has extended its idea of computing across an enormously broad spectrum of possibilities.
This past week, there was lots of coverage of perceived demand for the iPhone 8 models in the media. Shorter lines at retail stores were seen as evidence of poor sales, and there was the usual handwringing about what it might mean for Apple. To put all this in context, it’s helpful to look at Apple’s financial guidance for the September quarter and see what that tells us about what Apple was expecting by way of early iPhone sales, and whether it’s still on track.
Last week, Apple gave me a Space Gray Aluminum Apple Watch Series 3 to test. I have been wearing an Apple Watch since it first came out, and it has become an essential part of my device portfolio. I have already admitted to being a skeptic when it comes to wearables with cellular, but a skeptic with an open mind, willing to be proven wrong about the need for a connected smartwatch. So, to be honest, I was much more interested in what watchOS 4 had to offer, such as the new heart-rate monitor, than in cellular.
While I had been running the iOS 11 beta since it first came out, I had decided not to run the watchOS 4 beta, so the first time I experienced it was when I turned on the new Apple Watch Series 3. Aside from cosmetic embellishments to the UI and faces, the most noticeable improvements for me have been in activity and fitness. The higher-performance heart-rate monitor and the added data on recovery really help you bring your workout to the next level, even if, like me, you are just trying to get fitter. It has also been interesting to have Siri answer back rather than just display the answer to my question. I thought this might be a hook for me, and while I need to spend more time with the Apple Watch, it certainly has the potential to make me turn to Siri more.
The Role of the Carriers
As I listened to the Apple Watch Series 3 being announced at the Steve Jobs Theater, I said that it would be down to the carriers to mess up this opportunity. I was referring to the price they would choose to charge for activating a watch, which turned out to be about $10 a month, with some limited promotions. That price, in my view, is excessive considering what the device can do, which is much less than what an equally priced phone or tablet can do.
Little did I know that the actual experience of setting up the Apple Watch could also be a bit of a hot mess, mostly because store and online staff were not properly informed. After a few hours and a few self-taught sales assistants, I was able to activate my Apple Watch, but in the process I learned a few things.
My carrier still thought I was using an iPhone 5 because, apparently, the data on what phone I am using does not get updated automatically when a new phone with my phone number connects to the network. When I asked the salesperson, he said that they do not have that information and that I should call in my IMEI number every time I update my device. He could not quite understand why I chuckled when he said that! I am sure it will not surprise you to hear that in the UK my mobile operator knew what phone I was using, and that data was actively used to pitch upgrades and services.
It also turned out that my SIM was an old one that did not support Wi-Fi Calling, a feature you need to activate to get NumberSync working. Once again, I was surprised, as in the UK my carrier sent out a free SIM every time it upgraded them.
All these steps were necessary to start the activation, but they are totally unrelated to the Apple Watch and simply show very poor customer management on the carriers’ part.
Others reported glitches in their activation process, and I am not sure whether that was because carriers underestimated demand or because they were just not ready. Either way, customers are feeling the pain, and Apple will likely be criticized for it. The complexity of being almost first (activations of LTE smartwatches thus far have been quite limited) and of doing things slightly differently from others, by relying more on the synergy between Apple Watch and iPhone, left Apple trusting that carriers would be ready. In some ways, this reminds me of the Apple Pay rollout, when banks were heavily advertising their support, but when you called to activate your card, they had no clue. Some suggested that Apple should have waited, that the product was rushed, but I do not think that was the case. No matter when shipping started, these setup issues would likely have occurred. As was the case with Series 1, I think Apple is still in learning mode with the Apple Watch, in this case about how consumers will use the cellular connection.
Setting Expectations Right
Once I could set up LTE on my Apple Watch Series 3, I was very impressed by how smart the LTE connection is. When you look at Control Center, you can see when the Apple Watch is connected to the phone and cellular is therefore off: the phone icon is green and the cellular icon is white. When your phone is off, or not connected to your Apple Watch because it is out of range, the cellular icon turns green and, instead of the iPhone icon, you see signal bars (well, dots in this case). This allows the Apple Watch to optimize battery life.
Apple made it clear that while the Apple Watch can now be a stand-alone device, it is not meant to be that way all day long. I had no problems going for a dog walk or through a workout without my phone and being able to make a couple of test calls and receive messages and notifications. Battery life when I did that ended up being a little shorter than when I did not use cellular, as you would expect, but I still made it through the day.
Apple’s great demo of the employee who went paddleboarding led people to expect miracles. Apple’s decision to favor Wi-Fi made sense, as it helps with both battery life and data consumption, but it is proving difficult in an urban context, where you can find many public networks that require a password. Aside from this widely reported Wi-Fi issue, which I am sure Apple will address shortly with a software update, I think that, because of that demo, people now expect to have reception in locations where not even a phone would get a signal. Let’s be realistic: despite the innovative design Apple used for the antenna, there are going to be limitations on what an Apple Watch can do compared to an iPhone.
Should You Buy the Apple Watch Series 3 LTE?
I won’t tell you whether you should buy an Apple Watch Series 3 with LTE. Not because I cannot tell you, but because I should not. My experience is mine alone and, more so than with any other device you might carry, the Apple Watch is a very personal experience. You should base your purchase on what you think your key use cases will be, mindful of the limitations the Apple Watch might have as an iPhone replacement, simply because that is not its purpose.
This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the reviews of Apple’s newly launched products, analyzing Google’s announced acquisition of people from HTC, chatting about the Nest security product announcements, and debating the opportunity for Amazon-branded smart glasses.
There are certain apps that we use every day that are just fantastic and that I think we take a bit for granted. Google Maps is one of those. It is amazingly useful, works exceedingly well, and just keeps getting better. And, rather quietly, helpful new features are introduced without a lot of fanfare. For example, I’ve noticed that parking information is now integrated into directions in certain cities.
One of the more interesting meetings I had recently while in San Francisco for Mobile World Congress Americas was at the headquarters of Mapbox, a VC-funded company that crafts beautiful maps and provides a location data platform, APIs, and SDKs for developers to build into applications. For example, they provide the weather layers that we’ve all been seeing too much of on the Weather Channel over the past few hurricane-filled weeks. The market for digital mapping services is active and very competitive (even though Google is the behemoth). And there’s been huge growth of mapping APIs over the past couple of years.
This dip into the digital mapping world got me thinking about a few new features that would be very useful for mapping apps.
Greater Delineation of Road Surface Type. Especially Dirt Roads. Many people use mapping services to plan out bike routes. I’ve found that in many more rural locations, it is not clear whether a road is paved or dirt. This can spell trouble for bikers, especially if it’s a ‘road bike’ with thinner tires. Plus, it’s hard to determine road surface type using ‘satellite view’. Some of the services do a good job of delineating surface type for trails, such as running or bike paths, but, curiously, not as well for roads.
Offer A ‘Best Route’. For driving directions, there’s usually an option to ‘avoid highways’. And the mapping services have been steadily incorporating more information about roads that are ‘cycle friendly’ (e.g., have bike lanes). How about a ‘pleasant’ or ‘scenic route’ option, which would guide the user to more interesting, less traffic-y roads? For pedestrians, this might mean using side roads between two spots that are more enjoyable or interesting for walking. In cities, I could see some cool AI applications, for example designing a walking route that goes by galleries for those who like art.
Incorporate Car Pool/HOV Lane Into Traffic Info. This idea came about as a result of a recent experience. It was a Friday afternoon, and I was on 101 South in the San Jose area. Lucky me. The map said it would take 45 minutes to get to my destination (12 miles away). As it happens, there were two of us in the car, so we were able to use the carpool lane and get to our destination in 20 minutes instead. The time did adjust once we were in the lane, whizzing by other cars. Incorporating HOV lane information into mapping apps would be very useful. When requesting directions to a location, there could be a setting where the user is asked whether there are two or more passengers. The app could then plan the optimal route using HOV lanes, if applicable, and adjust the time to destination. Or, in the route options, show ‘use HOV’. Even letting the user know that “you saved 10 minutes by carpooling” would be a helpful incentive.
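To sketch the logic I have in mind (purely my own illustration; no mapping service exposes exactly this), a route planner could compare the general-lanes estimate against the HOV-lane estimate and report the time saved whenever the occupancy requirement is met:

```python
def pick_route(passengers, general_minutes, hov_minutes, hov_min_occupancy=2):
    """Choose the HOV lane when the car qualifies and it is actually faster.

    Returns (route, minutes, minutes_saved).
    """
    if passengers >= hov_min_occupancy and hov_minutes < general_minutes:
        return "HOV", hov_minutes, general_minutes - hov_minutes
    return "general", general_minutes, 0

# The 101 South example from above: 45 minutes in general lanes, 20 in the HOV lane.
route, minutes, saved = pick_route(passengers=2, general_minutes=45, hov_minutes=20)
print(f"Take the {route} lanes: {minutes} min, saving {saved} min by carpooling")
```

The real work, of course, is in the traffic model knowing separate speeds for the HOV and general lanes; the decision and "you saved X minutes" feedback on top of that is trivial.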
Greater UI Consistency Between Web and Mobile App. This might be more specific to Google Maps, but I find that there are some major usability differences between Google Maps on the web and on smartphones. For example, it is easier on the web to obtain directions between two places. And I think the “search nearby” function works more quickly, intuitively, and effectively on the web. On my PC, if I enter an address, there’s a nearby button with prompts for restaurants, hotels, and so on, or I can enter something in the search box, such as ‘hardware stores’. But the “explore nearby” function, or trying to find out what’s near a particular spot, is much less intuitive and user-friendly on a phone. The funny thing is, it used to be better and easier. There have actually been petitions to bring back the “search nearby” function, which was in what is now referred to as the “classic” version of Google Maps.
Better Tutorials & Help Information. There are many very useful features and settings in Google Maps and other digital mapping applications that I don’t think are well known, or that are under-utilized by the average consumer. A search for “how to” usually yields helpful results, but many people don’t have a good idea of what to even look for. I think the digital mapping companies, such as Google and Apple, or even third parties, could develop an improved set of visual tutorials on how to maximize the wonderful capabilities available in these services.
Earlier in the week, the FT published a story reporting that Amazon is allegedly working on a set of glasses that uses bone-conduction technology to bring Alexa to your ears. Given Amazon’s hiring back in 2014 of Babak Parviz, founder of Google Glass, Amazon eyewear should not come as a surprise. According to the FT, the glasses will have no screen, will work tethered to a smartphone, and will look like a traditional set of glasses.
As iOS 11 rolls out and a flood of augmented reality apps hits the market, the category faces the risk of a letdown. Apple is already featuring AR apps in a dedicated section of the iOS App Store, hoping in the process to educate consumers on the benefits. As we have observed before, when Apple releases a new technology that developers can take advantage of, with the chance that Apple may feature their app in return, developers flood the market with ideas. Some of these ideas are good, and some are not so good. It is this flood of applications that, while necessary, could lead to some technological letdown.
Nest this week made its biggest-ever announcements and introduced its first truly new hardware category since it acquired Dropcam in 2014. Its product line is now extensive and, with partner devices thrown into the mix, covers much of the addressable smart home market, with all of its own products overhauled in some fashion in the past year. But one thing remains stubbornly unchanged at Nest: its business model. And that may now be the biggest thing holding it back.
An Explosion in Activity After Years of Minor Change
The past year has seen an explosion in activity from Nest after years of relatively minor change and incremental updates. The picture below illustrates what’s happened to Nest’s portfolio of products since its inception, and it’s clear how different 2017 has been:
From its inception in 2011 with the original smart thermostat through the acquisition by Google and acquisition of Dropcam shortly thereafter, Nest created or acquired three main product lines, and from 2014 to 2016 that didn’t change. The Dropcam products adopted Nest branding and got some updates and new variants, but there was no dramatic change. Then, in 2017, the cameras got big updates with much smarter technology, Nest introduced its first cheaper (and less obtrusive) thermostat, and this week it announced its first doorbell product and a home security system. All of this came after the ouster of founder Tony Fadell and although that’s likely in part a coincidence, it’s notable how much more quickly the company has appeared to be moving in the past year.
But the Business Model Remains the Same
However, for all the new products announced over the past year, Nest’s business model remains the same: this is fundamentally an off-the-shelf, pay-upfront, do-it-yourself model, the same as it’s been since the beginning. And as I’ve argued before, that model has severe limitations in terms of its addressable market. Just consider the prices of Nest’s top of the line products:
Nest Cam IQ outdoor: $349
Nest Cam IQ indoor: $299
Nest Secure: starts at $499
Nest Smart Thermostat: $249
Several of these products will need to come in multiples to be useful, so those prices should likely be multiplied to get a real sense of what they’ll cost. That alone will make them cost prohibitive for many customers, but add in the intimidation factor of fiddling with thermostat wiring and the attendant risk of electric shock, drilling through walls to install a security camera, or trying to troubleshoot devices that won’t maintain a reliable connection to WiFi, and you further limit the addressable market.
The (DIY) Smart Home is Stuck
That’s why I’ve been arguing for quite some time now that the biggest thing the smart home market needs to go mainstream is a service model in which professionals install and manage the system and charge a monthly fee which recoups the cost of the hardware rather than charging for it upfront. That lowers the price barrier to entry considerably and means that those not willing or able to install or manage devices themselves can still participate in the smart home.
When new CEO Marwan Fawaz took over from Fadell, I posited that his background at Motorola and other companies which worked through carriers to support a service model might mean that we’d see more of this kind of thing from Nest going forward. But although Nest devices are included in some third party smart home services, Nest still hasn’t created its own, in contrast to players like Comcast, Vivint, AT&T, or the alarm companies. Indeed, at this week’s event it used the self-install model as a major selling point in contrast to having to wait around for a technician to come and spend hours installing a system. That may well be a selling point for the minority who feel comfortable with that model, but I worry that Nest is shutting itself off from a much larger addressable market by restricting itself to it.
A Foundation is in Place for a Managed Service
Nest already has a foundation in place for a services model, as it offers the Nest Aware service for monitoring cameras, and now has a partnership with Moni for 24/7 monitoring around its security system. It’s building a subscription model, but it’s entirely based on either third parties or automated systems today rather than incorporating the human touch in installation and management. Nest even offers to help you find an installer for your new Nest products, but that’s still an arm’s length relationship today and doesn’t offer the brand guarantee that could come from a truly integrated service.
To be sure, there’s still likely quite a bit of growth available for Nest in its current model, by expanding it to new markets and now expanding the line of products it sells. For now, that may be enough to sustain its business for the next couple of years, but there’s a much bigger opportunity out there if it takes many of the components it already has in place and turns them into a managed service.
If we take a step back and look at advertising during the print and TV golden age, we notice that companies used their advertising strategies in these mediums not just to sell products but also to build a brand. That remains the case today, but it seems big, established brands largely use these mediums to reinforce their brand as well as promote products. As more advertising dollars shift online, where advertising is in many ways cheaper than offline, it is fascinating to watch how brand upstarts are using new techniques in the digital age to build their brands and promote their products.
At least not in the derogatory sense that many are using to label a phone that costs $1000.
I started a conversation on Twitter last week trying to separate what is expensive from what is a luxury. As the comments continued, I realized that explaining the nuances of what luxury means in tech would take longer than 140 characters, so here I am. Please don’t think I fail to understand the privileged position from which I am discussing what a luxury is and what it is not. The focus here is on establishing the true value of the iPhone X. What I am not discussing is the much broader and critical impact that the higher cost of technology has on society.
Expensive and luxury are very much intertwined, and they are labels that change slightly depending on what item you are referring to. If you look up the definition of luxury in the Merriam-Webster dictionary, you find that luxury is:
something adding to pleasure or comfort but not absolutely necessary
an indulgence in something that provides pleasure, satisfaction, or ease
When you look up expensive you find:
involving high cost or sacrifice
commanding a high price and especially one that is not based on intrinsic worth or is beyond a prospective buyer’s means
characterized by high prices
I look at these definitions, and I seem to be doing a good job of gathering evidence against my own point. After all, when I think of the iPhone X, I do believe it adds to my pleasure, and it is not necessary; the iPhone 8/8 Plus could do the trick. Indeed, my current iPhone 7 Plus does a darn good job of being a smartphone. The iPhone X is also characterized by a high price, and it is beyond many buyers’ means, fitting both the luxury and the expensive definitions.
Luxury Phones are Mostly Bling
When I think of luxury phones, there is one brand that comes to mind first: Vertu. Vertu had a somewhat troubled life that ended this past July, when its current owner, Turkish businessman Murat Hakan Uzan, shut it down after failing to pay creditors. Vertu opened in 1998 as part of the Finnish phone maker Nokia. At that point, the phones ran on Symbian and were handmade with luxury materials, from gold to rubber from F1 tires. Starting price: $5,000. Vertu was sold in 2012 to the private equity company EQT, when the phones started to run Android and were still hand-made in the UK. In 2015, the company was sold to the Chinese company Godin Holdings and, finally, to Mr. Uzan in 2016.
In its glory days, Vertu was the mother of all luxury phones: not only was it hand-made like an haute-couture dress, using the most expensive metals and materials, but it also came with a concierge service that would help you do whatever you needed, from booking a taxi to shopping online.
In a less extreme sense, luxury phones have been about designer brands and bling. A quick search brings up top-ten charts with names from the fashion and car industries, or unknown brands that took mainstream phones and covered them in gems.
So What Happens When the Price Goes Up Because the Tech Is Better?
None of the phones you see associated with a luxury tag brings cutting-edge technology to the table. Their price is defined merely by the materials used and the power of the brand name on them. And this very point is why I do not think the iPhone X deserves to be lumped into the luxury phone bucket.
Now, I would not go to the extent of saying that the iPhone X has a “value price” like Apple CEO Tim Cook did on Good Morning America. But I do agree with his underlying point which is that the iPhone X has a lot of tech packed into it.
Let’s pretend there was no iPhone X and that the iPhone 8 Plus was the flagship product. Although it starts at $799, $50 more than the launch price of last year’s iPhone 7 Plus, nobody, as far as I am aware, has called it a luxury phone. For some reason, there is something about reaching the $1,000 price point that gets people to think differently. But let’s compare the features and see what the iPhone X has over the iPhone 8 Plus:
5.8-inch OLED Super Retina HD display
depth sensor that powers Face ID and supports Portrait Mode and Portrait Lighting for selfies and Animojis
Dual optical image stabilization
If we are OK with $799 for the iPhone 8 Plus and we add all this technology, do we honestly think the price should not increase? Some people argue this is all Apple tax, but while the Apple brand of course commands a premium, it does so across devices. This means the Apple premium equally impacts other iPhone models too.
Is $1,000 Too Much for a Phone?
A genuine question to ask is whether $1,000 for a phone is just too much, even when that phone is an iPhone, and the answer, once again, is not a straightforward one. Not so much because most consumers don’t pay $1,000 straight up, but because the value people get from a phone, as well as their tolerance for tech, differs from user to user.
The return on investment that most people get from their smartphone is far bigger than anything they ever got from a PC (outside of work), and this is even more so with iPhones. There is also a much stronger emotional bond with a phone than with any other gadget we own. Lastly, software updates delivered to these phones lengthen their life, although the draw of the latest upgrade will try to make what you own feel inadequate.
So who is the iPhone X for? If you want the best product there is in the lineup – not just the most expensive, but the best tech – then the iPhone X is for you. If you want to indulge in tech that is adding pleasure but that is not necessary the iPhone X is also for you. But if you see smartphones as a utility device or are overwhelmed by how much technology these little rectangles have packed in then you better look elsewhere.
I have often been called a Silicon Valley apologist, and I never deny that I am on the side of history that sees our area as one of the most technologically creative in the world, one that has developed life-changing, work-changing, and education-changing products during the last 75 years.
I was born in San Jose and have watched it go from the sleepy fruit-orchard town of my birth in the early 1950s to a world-class tech center where hundreds of companies are developing innovative technologies they hope will be world-changing. In most cases, the technology is used for good. But look at recent developments, from Facebook taking ad dollars from hate groups and fake Russian sites to Google’s business model, which wants as much of our personal data as possible in order to bombard us with ads, and you can see why Silicon Valley is getting a lot of negative attention these days. Add to that the massive security breaches enabled by flawed software, and it has cast a real pall over Silicon Valley lately.
While I am very bullish on Silicon Valley, I know that its past and present have many blemishes to deal with, from our help in creating weapons of mass destruction to our current position of allowing social media to run amok. Because of a lack of real innovation and outdated rules for tracking and blocking fake social accounts, which have impacted everything from elections to the bolstering of hate groups, Facebook, Google, and even Twitter are now the targets of major governmental scrutiny around the world.
Silicon Valley is also coming under attack over corporate tax issues and offshore holdings, and in more and more articles I see Silicon Valley painted as a villain instead of the creative innovator that drives many of our tech breakthroughs as well as a significant part of the US and world economy.
I have been talking to some industry pioneers who, like me, have been in Silicon Valley for decades, and they are gravely concerned about the tone of what the outside press and social media are saying about the Valley and tech in general. Add the recent disclosures of sexual harassment within the VC community and various tech companies, as well as questions about diversity, and Silicon Valley is coming under pressures it has never had to deal with in the past.
“From Facebook’s advertising and fake-news issues to Google’s pay practices and antitrust woes, Silicon Valley’s biggest tech companies are feeling the heat lately. The left, the right and those in between are slamming the tech giants, leading to headlines such as “Conservatives, liberals unite against Silicon Valley” and “There’s Blood in the Water in Silicon Valley.”
It goes on to quote Trevor Potter, president of the Campaign Legal Center and a former chairman of the Federal Election Commission, who said in a letter he wrote to Mark Zuckerberg:
“[B]y hosting these secretly-sponsored Russian political ads, Facebook appears to have been used as an accomplice in a foreign government’s effort to undermine democratic self-governance in the United States,” Potter wrote, according to Yahoo News. “Therefore, we ask you, as the head of a company that has used its platform to promote democratic engagement, to be transparent about how foreign actors used that same platform to undermine our democracy.”
Then later in the week, ProPublica wrote a story saying Facebook’s self-serve ad platform was allowing advertisements that targeted groups such as “Jew haters.”
For a lot of us who have been part of Silicon Valley’s past and present, these recent developments are more than concerning. For decades, Silicon Valley’s pioneers and tech workers have toiled long hours to bring groundbreaking technology to the world, technology that has changed lives and businesses in hundreds of ways. And for the most part, that change has been for good. Until recently, even social media had a mostly positive influence on people; it helped bring about the Arab Spring revolution a few years back and was instrumental in helping people rescue others during the recent hurricanes in Texas and Florida.
But now Russia and other trolls have learned how to game the systems and play into the overall profit motives of the Facebooks and Googles of the world, which operate with minimal oversight lest it impact their profits.
All of these things combined have painted a very different picture of Silicon Valley to many in the US and around the world. While I am troubled by the Valley’s image taking a major blow, I am more concerned that too often our engineering-driven world has created products just to create them, without really understanding the ramifications of what it has made. In many cases, I do not see any ethics checks, or even thought about the long-term impact the technology may have, when the creative process begins.
This is especially the case with Facebook, Google, and even Twitter. While all have primary virtues, they have all evolved to include serious flaws when it comes to privacy, allowing fraudulent accounts, and more, and they have not come up with either the proper technology advances to control these problems or policies that keep them from happening in the first place. The fake-news issue in itself is a Pandora’s box with world-changing ramifications.
This type of scrutiny could bring about more governmental oversight, and perhaps these data behemoths might even come under antitrust regulation if they don’t find a way to keep their sites in check and become more responsible for what is being posted and how data is collected for their business gains.
Given what I see being written about the Valley lately and how governmental and social leaders are targeting many tech companies, I fear that a day of reckoning is about to be upon us. Most of the big tech companies are going to be challenged in ways they are not prepared for given the intense pressure that is building up in Washington, the EU and other parts of the world.
I don’t know how tech will find its way back into the good graces of the public. But if these companies don’t find a way to regulate themselves and be more ethical in how they run their businesses, I suspect we will see a much bigger wave of governmental and regulatory oversight, the kind we hoped would never happen to tech companies in general and especially the world-changing ones here in Silicon Valley.
One of the most appealing aspects of many tech-based products is their ability to be improved after they’ve been purchased. Whether it’s adding new features, making existing functions work better, or even just fixing the inevitable bugs or other glitches that often occur in today’s advanced digital devices, the idea of upgrades is generally very appealing.
With some tech-based products, you can add new hardware—such as plugging a new graphics card into a desktop PC—to update a device. Most upgrades, however, are software-based. Given the software-centric nature of everything from modern cars to smart speakers to, of course, smartphones and other common computing devices, this is by far the most common type of enhancement that our digital gadgets receive.
The range of software upgrades made for devices varies tremendously—from very subtle tweaks that are essentially invisible to most users, through dramatic feature enhancements that enable capabilities that weren’t there before the upgrade. In most cases, however, you don’t see entire new hardware functions being made available through software upgrades. I’m starting to wonder, however, if that concept is going to change.
The event that triggered my thought process was Tesla’s recent decision to temporarily enhance the battery capacity, and therefore driving range, of their Tesla vehicles for owners in Florida who were trying to escape the impact of the recent Hurricane Irma. Now, Tesla has offered software-based hardware upgrades—not only to increase driving range but to turn on their autonomous driving features—for several years.
Nevertheless, it’s not widely known that several differently priced models of Tesla’s cars are identical from a hardware perspective, but differ only in the software loaded into the car. Want the S75 or the S60? There’s an $8,500 price and 41-mile range difference between the two, but the only actual change is nothing more than a software enablement of batteries that exist in both models. Similarly, the company’s AutoPilot feature is $2,500 on a new car, but can be enabled via an over-the-air software update on most other Tesla cars for $3,000 after the purchase.
In the case of the Florida customers, Tesla was clearly trying to do a good thing (though I’m sure many were frustrated that the feature was remotely taken away almost as quickly as it had been remotely enabled), but the practice of software-based hardware upgrades certainly raises some questions. On the one hand, it’s arguably nice to have the ability to “add” these hardware features after the fact (even with the post-purchase $500 fee above what it would have cost “built-in” to a new car), but there is something that doesn’t seem right about intentionally disabling capabilities that are already there.
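The software-enabled hardware model described above essentially amounts to a feature flag gating capacity that is physically present in every car. A minimal sketch of the idea (all class names and numbers are invented for illustration; this is not Tesla's actual implementation):

```python
# Hypothetical sketch of a software-gated hardware feature: the battery pack
# is physically identical across trims, and an over-the-air flag controls
# how much of it the car may use. Names and numbers are invented.

class BatteryPack:
    PHYSICAL_KWH = 75  # capacity actually installed in every car

    def __init__(self, software_limit_kwh: int):
        self.software_limit_kwh = software_limit_kwh

    def usable_kwh(self) -> int:
        # The software limit can never exceed what the hardware provides.
        return min(self.software_limit_kwh, self.PHYSICAL_KWH)

    def apply_upgrade(self) -> None:
        """Simulate a paid (or emergency) over-the-air unlock."""
        self.software_limit_kwh = self.PHYSICAL_KWH

# A lower-tier car ships with the same pack but a 60 kWh software cap.
car = BatteryPack(software_limit_kwh=60)
print(car.usable_kwh())  # 60
car.apply_upgrade()      # remote unlock, as in the Florida example
print(car.usable_kwh())  # 75
```

The asymmetry the paragraph above points out is visible here: the unlock is a one-line configuration change, which is exactly why disabling capability that already exists feels uncomfortable to some buyers.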
Clearly, Tesla’s policies haven’t exactly held back enthusiasm for many of their cars, but I do wonder if we’re going to start seeing other companies take a similar approach on less expensive devices as a new way to drive profits.
In the semiconductor industry, the process of “binning”—in which chips of the same design are separated into different “bins” based on their performance and thermal characteristics, and then marketed as having different minimum performance requirements—has been going on for decades. In the case of chips, however, there isn’t a way to upgrade them—except perhaps with overclocking, where you try to run a chip faster than what its minimum stated frequency is—and there’s no guarantee it will work. The nature of the semiconductor manufacturing process simply creates these different thermal and frequency ranges, and vendors have intelligently figured out a way to create different models based on the variations that occur.
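The binning process described above can be sketched as a simple sorting step over tested parts. The frequency thresholds and SKU names here are hypothetical, purely to illustrate the mechanism:

```python
# Hypothetical illustration of semiconductor "binning": identical dies are
# sorted into SKUs based on the maximum stable frequency each one achieves
# in testing. Thresholds and SKU names are invented for the example.

def bin_chip(max_stable_ghz: float) -> str:
    """Assign a die to a marketing SKU based on its tested frequency."""
    if max_stable_ghz >= 4.0:
        return "flagship-4.0GHz"
    elif max_stable_ghz >= 3.6:
        return "mid-3.6GHz"
    elif max_stable_ghz >= 3.2:
        return "value-3.2GHz"
    else:
        return "reject"

# Dies from the same wafer test differently due to process variation.
tested_frequencies = [4.2, 3.8, 3.3, 3.0, 4.0]
bins = [bin_chip(f) for f in tested_frequencies]
print(bins)
# ['flagship-4.0GHz', 'mid-3.6GHz', 'value-3.2GHz', 'reject', 'flagship-4.0GHz']
```

The key contrast with Tesla's model is that here the bins reflect real physical variation, so there is usually no latent capability to unlock later.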
In other product categories, however, I wouldn’t be surprised if we start to see more of these software-based hardware upgrades. The benefits of building one hardware platform and then differentiating solely based on software can make economic sense for products that are made in very large quantities. The ability to source identical parts and develop manufacturing processes around a single design can translate into savings for some vendors, even if the component costs are a bit higher than they might otherwise be with a variety of different configurations or designs.
The truth is, it is notoriously challenging for tech hardware businesses to make much money. With few exceptions, the profit margin percentages for tech hardware are in the low single digits, and many companies actually lose money on hardware sales. Most hope to make it up via accessories or other services. As a result, there’s more willingness to experiment with business models, particularly as we see the lifespans for different generations of products continue to shrink.
Ironically, though, after years of charging for software upgrades, we’ve seen most companies start to offer their software upgrades for free. As a result, I think there’s more reluctance among consumers and other end users to pay for traditional software-only upgrades. In the case of these software-enabled hardware upgrades, however, we could start to see the pendulum swing back the other way, as virtually all of these upgrades have a price associated with them. In the case of Tesla cars, in fact, it’s a very large cost. Some have argued that this is because Tesla sees itself as more of a software company than a hardware one, but I think that’s a difficult concept for many to accept. Plus, for many traditional hardware companies who may want to try this model, the positioning could be even more difficult.
Despite these concerns, I have a feeling that the software-based hardware upgrade is an approach we’re going to see a number of companies try variations on for several years to come. There’s no question that it will continue to come with a reasonable share of controversies (and risks—if the software upgrades become publicly available via frustrated hackers), but I think it’s something we’re going to have to get used to—like it or not.
In a post a few weeks ago, I talked about the growing body of data suggesting which product segments are most susceptible to a form of disruption theory known as low-end disruption. Through a series of recent conversations I’ve had with some investors, business school teachers, and “thought-leaders,” it became clear to me that many supposedly smart minds still fall into a dangerous trap. I’m calling this trap commodity thinking, and I’m doing so for a few reasons.
Commodity thinkers tend to believe everything is destined to become commoditized. This common way of thinking among many who look at and analyze consumer electronics industries leads them to assume that if a product falls into the consumer electronics category, it will become commoditized. Examples like the Sony Walkman, TVs, or handheld game pads are often cited as evidence of the inevitability that things like smartphones and other types of computers will always be a battle of prices.
I’m not saying price is not important; what I am arguing is that price is not the most important factor to consumers. We looked to quantify this, to a degree, in several studies we did last year, in both our pre-holiday buying study and our post-holiday purchase study. We explored products like wearables/smartwatches, smartphones, TVs, PCs, and tablets. In each category, we never found more than 20% of interested buyers, or of those who did purchase a product in the category, who cited the price of the product as their major purchase driver. What was more interesting was how devices that were more general purpose, like PCs, tablets such as the iPad, the Apple Watch, and smartphones, seemed to be even less price driven than devices like TVs or fitness bands, which are more specific-purpose products.
We have enough data on this subject for me to hypothesize further that specific-purpose products tend to be the ones more susceptible to commoditization. The commonality among products that tend not to be as price sensitive is that they are things we consider computers, all of which are more general purpose in nature and can do more than one thing, many things in reality, for their owner. Perhaps the perceived value goes up because a product has many dimensions instead of just one. Whatever the case, we now have more than enough consumer data from companies themselves, retail/channel experts, and quantitative studies to understand beyond a shadow of a doubt that certain products and product categories are under no threat of commoditization.
The other reason I’m calling this commodity thinking is that the act of falling into the trap of believing all consumer electronics products become commoditized is itself commoditized. Meaning, it seems this is the prevalent thinking of the majority. The reason for this, to put it simply, is that it’s easy. This way of thinking is easier to wrap a spreadsheet around, or to build a template for your model. It is, however, deeply flawed and devoid of understanding of the deeply nuanced consumer mindset. A favorite story of mine is when a hedge fund manager at a large investment firm said to me, “Why would anyone buy a $700 iPhone when a $300 smartphone will do just fine?” He happened to be wearing an Armani suit, a $10,000 Rolex, and Prada shoes. Hopefully, you see the irony in this scenario. He can’t see why someone would buy a more expensive product when a cheaper one is just as good, yet he spent absurd amounts of money on clothes and accessories when much less expensive alternatives would do just fine.
Commodity thinking is pervasive, and it is dangerous from a business perspective. Understanding the customers for a product or service, what the pain points are, and where their value propositions lie, are key parts of establishing a price elasticity strategy. In a world where more and more technology is getting smarter, more useful, and more valuable, there is a good chance commodity thinking is on its way out entirely in the tech industry.
Apple’s announcement last week of its new 4K Apple TV reinforced its positioning in the market, which remains remarkably distinct from the other three major players competing for US buyers. So far, that strategy has seen it take fourth place in market share, something that seems unlikely to change going forward. Why is that? And does it matter?
The iPhone X rightly garnered most of the world’s attention from Apple’s launch event this week, but the company’s announcement of a new Apple Watch Series 3 with LTE and new Watch OS 4 updates excited many of us who closely watch the wearables market. The new $400 product may not significantly change the trajectory of Apple’s near-term smartwatch growth, but several of the technical features it contains are substantial. It demonstrates Apple’s technical prowess, and some of these additions have the potential to reverberate through the tech industry and adjacent product categories.
More Tech, Same Form Factor As Apple and other tech firms such as Samsung continue to push the boundaries of miniaturization, it is easy to take products that appear to be iterative in nature for granted. But the amount of next-generation technology that Apple crammed into the Apple Watch Series 3, expanding the form factor by a scant 0.25 mm, is quite impressive. In addition to the full LTE and UMTS cellular radio, Apple has also added a new dual-core S3 processor it claims is 70 percent faster than its predecessor, and a new wireless chip (W2) that offers notably faster WiFi and Bluetooth connectivity. The Watch smartly switches from Bluetooth to cellular when you separate it from the phone and switches back when you come back into range. Apple also added a barometric altimeter that measures relative elevation and moved the device’s storage to 16GB (the non-cellular watch still has 8GB).
New Hardware, New OS As always, Apple is launching the new hardware with a brand new operating system. Watch OS 4 has a long list of new features, but two of the most interesting to me and other health and fitness-focused users include an updated heart-rate tracking feature and new activity options. Going forward, the Apple Watch will monitor your heart rate all day long, instead of just when you start a workout. By capturing heart rate across a range of activities, from resting to walking to running and more, the watch can over time build a more accurate view of your fitness and health. Once the watch establishes your baseline, it can provide more precise fitness targets during workouts, and can help you understand how your body recovers from workouts. It can also monitor you for issues during the day. So if your heart rate is acting abnormally during rest, the watch will alert you.
Watch OS 4 also continues Apple’s tradition of bringing additional fitness options to the hardware. Among the most interesting is the new High-Intensity Interval Training workout option. During these types of exercise sessions, the users participate in different physical activities to increase and then decrease their heart rate. The current watch has no facility for capturing this increasingly popular form of exercise. The new OS also brings to market a feature Apple announced at WWDC which will let the watch talk to future fitness machines enabled with the new GymKit.
Leaving the Phone Behind Wearable skeptics have long suggested that adding LTE will do little but decrease a device’s battery life. And frankly, anyone who thinks consumers en masse will drop their smartphones for an LTE-connected wearable is missing the point. The fact is, there are many times when leaving the house without the phone would be highly desirable. And Apple smartly made a point of enabling the ability to stream Apple Music from the watch right out of the box. That means you can connect your watch to a set of wireless AirPods and listen to tunes without the phone, too. The non-phone use cases are admittedly limited, but I’m interested in the idea that an always-connected Apple Watch might allow me and others to partially—but not entirely—disconnect from the world for parts of the day. With the phone left behind, the compulsion many of us have to constantly check email and news may diminish. But if an emergency text or call comes through, you’ll still receive it. The idea of reclaiming parts of my day, and being more present and just slightly less connected, sounds quite appealing to me.
eSIM’s Big Moment Finally, what may well end up being Apple’s most impressive technical feat: The purported seamless ability to add the watch to an existing carrier data contract and phone number. One of the biggest areas of friction for adding LTE to new devices is the fact that it typically involves a physical SIM card and a sometimes frustrating interaction with the carrier. The plan for Apple Watch is to utilize an eSIM that will negate the need for receiving a physical SIM in the mail or at a carrier location. There has been some pushback on pricing, as it looks like US carriers will charge customers $10 per month. This is the same pricing as adding an LTE tablet today, and at some point, the carriers need to stop asking customers to repeatedly pay extra to access the data they’ve already purchased. But what’s more important here is this: While the Apple Watch isn’t the first product to support eSIM, it may prove to be the most successful one to date. If this turns out to be as easy to do as Apple promises, it will open up the possibility of consumers embracing this technology going forward. That could mean an easier ramp for future cellular-connected products. We know that ARM-based, LTE-focused Windows 10 systems should appear in the market early next year. And at some point in the future, Apple may decide to address the market demand for an LTE-enabled Macbook. If Apple has figured out how to make this process less painful, it may prove to be among the more notable achievements to come from this important product launch cycle.
With the launch of new iPhones this week, and specifically the creation of a fourth tier in the current-generation lineup, I’ve been talking to people about price quite a bit over the last couple of days, thinking about average selling prices for iPhones and how Apple has been able to raise ASPs even as it has extended the iPhone lineup down market, which is a pretty unusual feat.
A brief history of iPhone pricing
A good place to start this analysis is with a brief review of the last few years’ iPhone lineup and the price range implied by it, as well as the average selling prices that resulted from it. The diagram below shows how the iPhone lineup has expanded over the last few years from a single current model to four (ignoring the fact that older models have stuck around at discounted prices as well):
That expansion in the range largely reflects the maturation of the market – when markets are in their infancy, product lines can be simple because there’s plenty of addressable market to go around, but as they mature and become increasingly saturated, diversification in the product line becomes necessary to meet a wider range of use cases for increasingly sophisticated buyers. It was also clearly in part a response to competitive moves to offer larger phones in the case of the Plus line.
The chart below shows what’s happened to the price range and average selling prices as a result (in each case, the year refers to the year in which products were launched, with the ASP being the average price per shipment during the four quarters from launch). In this case, the lowest price available does include older phones sold at discounted prices, and I’ve included a pretty modest guesstimate for this year’s ASP, which could well be quite a bit higher depending on supply levels for the iPhone X.
The price range from the very cheapest iPhone being sold to the most expensive model has obviously expanded significantly, from just $200 in 2010 to $800 once the iPhone X launches, with the iPhone SE dropping to $350. That reflects the broadening range of iPhones available, as well as the increasingly large storage tiers the various lines offer.
Remarkably strong ASPs throughout
But to my mind the most interesting thing to look at is what’s happened to ASPs, because the pattern in Apple’s other three big product lines – Mac, iPod, and iPad – has been that as the product has matured and Apple has spread the lineup down market, ASPs have fallen, sometimes very significantly. Mac ASPs in the late 1990s were well over $2000, but have dropped by around half in 20 years even ignoring inflation during that period. iPods sold for an average of over $400 when they first launched, but dropped to an ASP of near $150 in the ensuing ten year period, while iPad ASPs have dropped from over $600 to closer to $400.
The iPhone is therefore a massive outlier even among Apple’s own product lines, with an ASP that has held constant or risen for much of the last ten years. Apple has changed the way it reports iPhone revenues since its inception, which makes it impossible to make true comparisons between early ASPs and those today, but as the chart above shows, ASPs have risen over the last seven years at least, even as the lowest available price has fallen by $200. In other words, though Apple has reduced the lowest price of iPhones by nearly half since its inception, people are choosing to spend more and not less over time, even as the market for iPhones has expanded dramatically beyond the US and other mature, high-income markets. Put another way, that expansion in ASP has happened in spite of many sales of cheaper, older phones in less wealthy markets around the world. The big question is therefore why, and what’s different about the iPhone?
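An ASP is simply total revenue divided by total unit shipments, so a blended average can rise even as a cheaper model joins the lineup, provided enough buyers shift toward premium tiers. A toy calculation makes this concrete (all prices and unit volumes below are invented for illustration, not Apple's actual figures):

```python
# Toy average-selling-price calculation: ASP = total revenue / total units.
# All prices and unit volumes are invented for illustration.

def asp(mix):
    """mix: list of (price, units) tuples, one per model sold."""
    revenue = sum(price * units for price, units in mix)
    units = sum(units for _, units in mix)
    return revenue / units

# Year 1: a single model at $649.
year1 = [(649, 100)]
# Year 2: a cheaper $349 model is added, but half the buyers pick a
# new $999 premium tier.
year2 = [(349, 20), (649, 40), (999, 60)]

print(round(asp(year1)))  # 649
print(round(asp(year2)))  # 774 -- the blended ASP rises anyway
```

This is the dynamic the iPhone exhibits: the floor price drops, yet the mix skews upward enough to pull the average higher.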
Marketing, business models, and the centrality of the smartphone
I’d argue there are three key reasons for the behavior of iPhone ASPs. The first is marketing, the second is business and sales models for the iPhone, and the third is the increasing centrality of iPhones in our lives. Starting with marketing, I don’t think we can overlook Apple’s successful marketing of new iPhones each year as big improvements over the last year’s phones and especially over those from two years before, given that the two-year upgrade cycle was for many years the default. Apple has convinced people to buy a top of the line iPhone roughly every two to three years very consistently, and the reality is that the price of the top of the line iPhone has risen significantly during that period without much evidence of price elasticity in mature markets at least. That’s a testament to Apple’s product management, positioning, and promotion of its new devices.
Secondly, the business and sales model for selling smartphones in many markets is a huge factor here too. There’s really no other major consumer electronics category where the default sales model in many markets is to pay in monthly installments rather than upfront. There are certainly financing models in other markets, but nowhere is the installment or leasing model as ubiquitous as with smartphones. That model has helped the iPhone enormously, as one of the more expensive smartphones on the market, because consumers are not exposed as directly to the price increases they’ve faced over the years. Without the wireless carriers and their subsidy and installment models, I’m very skeptical the iPhone would have seen the pricing performance it has.
Thirdly, the willingness of consumers to pay more and more for their iPhones is a testament to the increasing centrality of smartphones to our lives and the role these devices now play for us. In many cases, they’re partial or complete replacements for personal computers, used for managing calendars, writing emails, online shopping, paying bills, and so on. Many of us run businesses in part or entirely from our smartphones. And so these devices have taken on enormous significance for us and we’re willing to pay more to get bigger screens, faster processors, and other features which reflect their centrality in our lives.
My bet is that we’ll see significant demand for the iPhone X this year, and if Apple is able to meet that demand with significant supply later in 2017 and into 2018, ASPs could again rise quite significantly. That would be a testament to these same three factors: great marketing by Apple, the enabling business and sales models for smartphones, and a recognition by consumers that these are some of the most important devices in their lives.
I’m not the only one, but there aren’t many folks out there who have been pounding the Apple silicon strategy drum. There are many strategically fascinating elements to these efforts that many people, companies, and Apple competitors take for granted. I’ve argued before that the Apple silicon efforts are one of the core legs of the stool that help Apple differentiate and separate its products from the herd. If any component supplier in semiconductors or sensors cannot meet its needs or deliver on its vision, Apple simply designs the part itself. While I want to dig into the A11 Bionic processor itself and the key parts of the new architecture that are relevant, let’s look at the list of components Apple now designs itself.
Rumors have been swirling for some time that Apple was about to add a cellular modem to its popular Apple Watch line, and indeed, at yesterday’s event, that is exactly what Apple did. By adding a modem, users will be untethered from their iPhones and able to make and receive calls, and get notifications, alerts, messages, and various types of data that would be of interest even if the iPhone is back at home or at the office. For those who want to go for a run or a long walk, the Apple Watch Series 3 with the LTE modem can stand on its own and gives people a new level of freedom from their smartphones while still being connected if needed.
With a rumor mill that feels like it started the day after the iPhone 7 launched, we finally know everything there is to know about the new iPhone lineup. Three products: the iPhone 8, iPhone 8 Plus, and iPhone X, pronounced “ten.”
Before we got to the most anticipated product of the year, however, Tim Cook spent time talking about the biggest product we saw for the first time: Apple Park. The homage to Steve Jobs was short, intense, and from the heart. It also felt as if Tim Cook was turning the page to a new chapter, acknowledging that today’s and, more importantly, tomorrow’s Apple remains true to what Jobs believed Apple should be, but is now a company standing on its own feet.
The iPhone X
There have been months of looking back at the past ten years of iPhone in a somewhat nostalgic way. It is when looking at the last ten years of technology, however, that you see how the iPhone X rests on the shoulders of the iPhones that came before. Touch, the use of glass and metal, camera technology, the Retina display, and Touch ID all contributed to getting to the iPhone X that was just introduced. Most importantly, the iPhone X would likely not exist without the steps Apple took in its chipset designs, reaching a vertical integration that truly sets its products apart. Going forward, this focus on chips will prove key to machine learning and artificial intelligence, two technologies that were mentioned a lot during the keynote.
The two key features that will drive iPhone X interest are FaceID and the new Super Retina Display. While users will have to learn new gestures to navigate the new bezel-less screen, the learning curve does not seem steep. The screen is beautiful and drives instant gratification, while the real benefit of FaceID will come with use. As always with Apple, it is not about a feature performing one task; it is about leveraging that feature in different ways. So FaceID unlocks your iPhone X, but the technology behind it also opens your notifications or turns off your alarm if the iPhone “sees” you looking at the screen. And of course, face recognition is also empowering the new Animojis, which will push messaging to an all-new level! All these experiences point to Apple creating an even tighter connection between me, the user, and my iPhone, one where, thanks to ML and AI, things will happen just like magic.
iPhone X will ship on November 3, and the market expressed concern about this delay compared to the iPhone 8 and 8 Plus shipping date. However, there is one thing that I think people are neglecting to consider. In all the key markets where iPhone X will matter, Apple has a strong retail presence, which will allow Apple to be ready for holiday shopping. Unlike most of its competitors, Apple does not entirely depend on shelf space allocated earlier in the fall at the key retailers. Online and Apple Town Squares are a big part of Apple’s sales, and when it comes to iPhone X, those channels would likely have been responsible for most of those sales anyway.
I am also not concerned about potential iPhone X buyers opting for an iPhone 8 rather than waiting. Of course, either way works for Apple. Pricing was smart here. The way the iPhone 8 Plus, in particular, is priced almost makes the decision to go for the iPhone X easier: an iPhone 8 Plus with 256GB is $949. Some users might compromise on storage if they have an iCloud account and are used to storing everything there.
Overall the price of the new iPhone X is a non-issue in all those markets that have installment plans available. Consumers will see an increase in the low double digits on what they currently pay monthly.
The Strongest Lineup Yet
If you are still concerned about the price point of the iPhone X, the good news is that it is not the only option Apple has given potential buyers. Apple’s iPhone lineup has a little bit for everybody, from the iPhone SE at a very competitive $349 all the way to the new iPhone X, via the iPhone 6s and 6s Plus, the iPhone 7 and 7 Plus (which are now $100 cheaper), and the iPhone 8 and 8 Plus.
The key here is that with the choice in price there is no choice to be made in experience. While it is true that some features are hardware dependent, the vast majority of what the Apple ecosystem has to offer covers all these devices in some way. Take AR, for instance: while the iPhone X will be able to make use of the new depth-sensing camera for AR, owners of all the iPhones mentioned above can experience AR through iOS 11.
Some people are concerned that Apple created a new hero product that moves further away from the mainstream iPhone. Yet this is no different from what Apple has done with the iPad family and the Mac family. Even more so with iPhone, Apple realizes that not all users want more technology, and that, especially with some technology, not every user is ready for it. Yet this should not stop Apple from continuing to innovate.
A few more things…
Watch Series 3
I am a self-declared non-believer when it comes to cellular in a smartwatch. The main reason is that I would never leave my phone behind when I leave my home. The second is that adding LTE has meant a compromise in device size and battery life. Today, Apple took reason number two away!
Apple also helped my decision by pricing the LTE Watch very aggressively. For a $70 difference, I might just try and see if my umbilical cord to my iPhone cannot just be extended, as happens today with Watch Series 2, but be cut. While I still do not see myself intentionally leaving my iPhone behind, I am quite happy to pay for the peace of mind in case I unintentionally forget it.
Probably the biggest driver of connectivity for me is the ability to interact with Siri through voice alone. Thanks to LTE, Siri can now reply to you rather than just showing you an answer on the screen.
I found the video Apple used in the keynote showing a very wide range of people of all ages, genders, and nationalities to really drive home the point that Apple Watch delivers a different value to different people and you need to find what motivates you. I said this from the very beginning: Apple Watch will give back more if you initially invest time in figuring out what it can do for you.
Live sports coming to Apple TV at the end of the year is a great addition for the segment that spends the most money on TV content.
The mention of Music on Apple TV raised my curiosity about whether anything will be special about HomePod pairing with Apple TV.
An underestimated value of Apple TV is the role it plays as a hub for HomeKit. As consumers connect more and more devices in the home, this feature alone will deliver a good return on investment.
The fact that Apple TV 4K will upgrade your purchased non-4K content for free the moment 4K versions become available is a nice touch that might make some people think twice before buying digital copies elsewhere.
Where was Siri?
Siri was not mentioned much at the event other than when talking about Apple Watch Series 3. This does not mean that Apple feels differently about it.
Siri will have her moment when HomePod hits the market later this year, and I am still convinced we will see an all-new Siri.
As you are reading this, I will be on my way to the Apple fall event at Apple’s new campus, and more specifically the Steve Jobs Theater. I was thinking about all the things Apple may announce at this event and all the ways I will slice and dice my analysis of the products. While much of my focus will be on the iPhone, Apple Watch, and anything else that may debut, the less talked-about product I plan on spending some time observing and thinking about is the Steve Jobs Theater itself.
Essentially, you could verify your identity by providing some kind of unique piece of information that—in theory, at least—only you or other trusted parties would know. Like, for instance, your social security number.
Of course, those days are now gone, and last week’s monumental hack of credit reporting firm Equifax put a thundering exclamation point onto the end of that era. Throw in all the other high-profile hacks into companies like Home Depot, Target, etc. and it’s not too far a stretch to say that not only the social security number, but a great deal of other identifying information on nearly anyone in the US is now readily available. (In fact, paradoxically, the value of that once very important information has likely dropped dramatically.)
Identity verification without being physically in front of someone is still an incredibly important way in which we interact with the world around us, however, so what do we do? The problem is that we don’t really have a clear, universal alternative moving forward.
Yes, there are numerous efforts designed to move away from the more traditional “analog” methods of identity to digital ones, but none of them work across all the environments or interactions in which we find ourselves engaging. Ironically, the notion of moving to very basic forms of digital identity—usernames and passwords—has actually exacerbated today’s identity problem, and by a huge amount.
Today’s digital identities are essentially a horrendous tangle of good intentions gone wrong, because none of them is truly complete. Part of the reason is that, while moving toward a single digital identity (such as a government-sponsored system) offers some clear benefits, it also creates a single, critical point of attack. Lose that one identity, and you could potentially lose everything.
Important steps forward are being taken, however. First, we’ve seen tremendous growth in the use of multi-factor authentication, where you need to provide at least two forms of digital ID to verify your identity. The problem is that not all methods of providing a second or third factor, or “form,” of digital identity are equally strong, and several have been discovered to be much weaker than initially thought. Texting temporary log-in codes via SMS, for example, has serious weaknesses that weren’t initially identified.
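One widely deployed alternative to SMS codes is the app-based one-time password standardized in RFC 6238 (TOTP), where the code is derived on the device itself from a shared secret and the current time, so it never travels over the vulnerable SMS network. As a minimal sketch of how that derivation works (the secret string below is the published RFC test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the shared secret, base32-encoded (as delivered
                in a typical QR-code provisioning URI).
    at:         Unix timestamp to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of completed time steps since the Unix epoch
    counter = int((time.time() if at is None else at) // step)
    # HOTP (RFC 4226) applied to the counter, big-endian 64-bit
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test secret (base32 of the ASCII string "12345678901234567890")
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

Because the code changes every 30 seconds and the secret never leaves the device, an attacker who intercepts one code gains almost nothing, which is precisely the property SMS delivery fails to provide.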
Second, we are seeing much more use of different types of biometric authentication, which uses physical characteristics of your body to identify you. From fingerprint readers on notebooks and smartphones, to iris scanning, to (if rumors about Apple’s new iPhone are to be believed) facial recognition, these generally much more secure methods of ID verification are becoming more widespread. Some worry that biometric data, like a single universal ID, represents a security concern: you can’t “change” your biometric data, so if it’s somehow stolen you face a lasting security challenge. However, biometric data combined with the requirement for multiple factors of authentication (even, in some cases, multiple forms of biometric identification) is generally considered very secure.
Third, we’re starting to see more efforts to form industry-wide collaborations to help drive the “universality” of these identity concepts. The FIDO Alliance, for example, is working with a variety of major tech, credit card, banking, and other financial services companies to develop a standard that will interoperate across websites, devices, services and more.
In addition, just last week, the four major US carriers—in an extremely rare show of complete unity—announced the development of the Mobile Authentication Taskforce. This group will be responsible for developing a single, consistent method of authentication that both consumers and businesses can use to accurately identify people using mobile devices on any US telecom network. First results won’t be showing up until 2018, but this sounds like an enormously positive development.
The challenges of creating a viable, secure, and modern form of digital identity are extremely difficult, and even in spite of all the positive efforts I’ve listed here, there’s no guarantee we will have a viable option anytime soon. But as the events of the last week have hammered home, it is absolutely time to move past old ideas and embrace the opportunities that a digital identity can enable.
Are smartphones destroying our kids? That’s the premise of an extensive article in September’s The Atlantic.
The author, Jean Twenge, has been researching generational differences for 25 years, starting when she was a 22-year-old doctoral student in psychology. In this article she describes how the use of smartphones is so prevalent among the teen population, the generation she calls iGens, and how profound of an effect smartphones are having on social behavior, friendships, sex and more.
Her premise, based on extensive research findings, is that this generation is more comfortable online than out partying, and while physically safer, they’re on the brink of a mental health crisis.
She found that the iGens hang out much less with their friends most days, with the frequency dropping by more than 40 percent from 2000 to 2015. Teens are dating less, with just 56 percent of high school seniors going out on dates in 2015, down from 85 percent for the previous generations. And they have more leisure time but waste it, spending more time in their room alone, on their phones, often distressed.
With less dating, sexual activity has dropped, which is one of the positive findings. Among ninth-graders, sexual activity has dropped by almost 40 percent from 1991. “The average teen now has had sex for the first time by the spring of 11th grade, a full year later than the previous generation.”
But Twenge also found that the iGens’ maturity level has fallen. “Across a range of behaviors—drinking, dating, spending time unsupervised— 18-year-olds now act more like 15-year-olds used to, and 15-year-olds more like 13-year-olds.”
While teen murder is down, suicides are up. “Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide. … Since 2007, the homicide rate among teens has declined, but the suicide rate has increased.”
The author’s research found that “teens who spend more time than average on screen activities are more likely to be unhappy, and those who spend more time than average on non-screen activities are more likely to be happy. … There’s not a single exception.”
All screen activities are linked to less happiness, and all non-screen activities are linked to more happiness. Eighth-graders who spend 10 or more hours a week on social media are 56 percent more likely to say they’re unhappy than those who devote less time to social media. Admittedly, 10 hours a week is a lot. But those who spend six to nine hours a week on social media are still 47 percent more likely to say they are unhappy than those who use social media even less. The opposite is true of in-person interactions: those who spend an above-average amount of time with their friends in person are 20 percent less likely to say they’re unhappy than those who hang out for a below-average amount of time.
Lastly, according to the author, the smartphone is reducing the number of hours teens sleep at night. Experts recommend about nine hours of sleep, but that number drops around the same time that most teens get a smartphone, with a large percentage getting less than seven hours, which is considered sleep-deprived. Fifty-seven percent more teens were sleep-deprived in 2015 than in 1991, according to the author. One survey found that teens who go to social media sites every day are 19 percent more likely to be sleep-deprived, and those who use devices with screens right before bed are likely to get less sleep.
The article, accessible for free, provides a lot more details including some revealing graphs. Whether smartphones are good or bad may be debated, but it’s clear they’re profoundly changing teen behavior.
This article was initially published by PJ Media, LLC at PJMedia.com.