Over the weekend, I read this story arguing that our love affair with digital was over. Although I see some of the trends David Sax outlines, I am not certain they can all be blamed on our growing distrust of technology. More importantly, I do not think technology per se is to blame here; we are!
The article brings up many negative effects of technology on our lives: stress, job loss, the impact on human interactions. But when you think about it, it is not technology itself that is at fault but rather how we use it.
Technology is not the Answer to Everything. We just like to think so!
Over the past few years, I think three major things occurred that impacted our use of technology. First, innovation happened at a pace that we had not really experienced before. Second, because of the falling price of technology, far more people were exposed, either directly or indirectly, to such innovations. Finally, technology has also become more “human-friendly,” making it easier for people to embrace it.
The problem with having so much technology at our disposal is that we started to think technology could solve any problem and be the answer to everything. Education is a very good example. Many schools added gadgets of different kinds, but teachers did not change how they teach. They simply replaced paper with screens. Others added gadgets and apps, but mostly to replicate what the teacher used to do, just maybe a tad more personally. The path to more engaged, smarter, ready-for-life kids is to first look at how we teach and then at how technology can help us deliver. We should not turn to technology first.
While we might be concerned about AI taking over our jobs, we should recognize the many opportunities technology has enabled over the past few years. Apps and services have reached millions of people overnight, while start-up incubators and the maker movement have allowed talent to stay where it was born while still accessing an international stage.
Sometimes, we also rely on technology out of convenience; after all, technology is supposed to make our lives easier. There might be different ways to perform a task involving different levels of technology, but convenience drives us. This is fine when convenience meets effectiveness and drives the best results, but it is less so when convenience is driven by laziness. The choice is ours, not the technology’s.
The Limitations of Analog
Some of the technological changes we are undergoing are certainly scary, especially if you are easily impressed by movies like “Her”! When it comes to this analog revival, however, I don’t believe that the rediscovered love for books or vinyl is necessarily a rejection of technology. The revival of books has a lot to do with the increased popularity of local bookstores, where customers feel they are supporting a local business, build relationships with the people who work there, and get more personal recommendations. In a way, this is no different than the growing popularity of local, independent coffee shops.
Is the love for vinyl really about technology and sound, or a broader statement about what music as an industry used to be? The other side of the coin, though, is that technology has empowered artists who might never have had the means to become a worldwide success – the UK band Glass Animals is a good example.
As for human interaction, it certainly does not have to disappear because of technology. While some of us have become more comfortable chatting on Messenger than in real life, many still enjoy grabbing a coffee or a beer with friends. Not having access to technology, however, would make social interactions impossible for many people. The classic video-chat ad that shows grandparents and grandkids is a reality at our house, as my mom lives in Italy and we only see her once a year. Technology is how we remain part of each other’s lives, in a much more vibrant way than old phone calls allowed us to.
Technology is also what allows people with disabilities to have richer social interactions and, more broadly, to live fuller lives. Think of how 3D printing is revolutionizing prosthetics, or how voice-over support allows visually impaired people to read, or to “see” through the descriptions given to them.
Turning Our Love Affair into a Happy Marriage
I do realize that talking about turning our love affair into a happy marriage only shows I have been in a relationship for a long time! But I do think this is the key. The passion and excitement that spark the start of many relationships, the sleepless nights, the stomach butterflies, the lack of appetite, eventually turn into a more “sustainable” set of feelings. That does not make it any less “love,” but it perhaps allows us to regain a bit more control over our lives and the relationship itself. I feel the same is true with technology.
After being swept off our feet by what smartphones enabled – an always-on lifestyle, an app-and-service culture, social media – we need to regain some control. We need to pace ourselves and find some “me time.” Such balance can be reached in different ways, either by embracing analog again or by actually using more tech. At the core, however, such balance will only be possible if we understand that we are in control of, and not controlled by, the technology that surrounds us. We have the power to unplug!
With the recent Stitch Fix IPO, I’ve noticed a re-emerging narrative around the “new retail” theme. Stitch Fix falls under this theme, as I’d argue do Dollar Shave Club (purchased by Unilever), Bonobos (purchased by Walmart), and Trunk Club (purchased by Nordstrom). You have probably seen many ads on Facebook as well for companies in these spaces trying to build a brand around this new retail theme.
One of the most widely praised concepts to emerge from the tech industry over the last several years is ridesharing, particularly through the services of companies such as Uber and Lyft. Not only is having the ability to request a ride and have it promptly show up nearly wherever you are a great service that millions now enjoy, it is also a textbook example of how the disruption of a traditional industry—in this case, the roughly century-old taxi business—can enable new types of business opportunities that couldn’t exist before.
As great a concept as ridesharing may be, however, there are increasing signs of strain on the business model that ridesharing companies use, particularly regarding costs and technology timeframes.
Before digging into these concerns, it’s important to remember the extent of influence that ridesharing has had on the tech industry overall. In fact, ridesharing’s enablement of the big picture concept of “transportation as a service”—where people can forego the purchase and ongoing maintenance of an automobile, and request rides whenever and wherever they need them—arguably has led to an enormous range of “as a service” offerings, most of which seem to suggest they’re the “Uber of something.” Of course, with regard to ridesharing specifically, arguably, some urban dwellers have lived this way for decades, and simply used taxis to get from place to place. With ridesharing apps and services, however, the process is significantly easier for a much wider group of people and—for now at least—much less expensive.
Part of the reason for the lower prices is the significantly different business models and expense structures between taxi companies and ridesharing companies. Because drivers for ridesharing companies aren’t employees, and therefore aren’t entitled to regular salaries, benefits and other costs associated with personnel, the overhead costs for them are significantly lower than they would be in other businesses. Throw in the fact that these drivers are using their own cars, and the physical asset-related expenses of most ride-sharing companies are almost zero. Traditional taxi companies, on the other hand, typically have to cover all of these types of expenses.
Recent legal challenges to Uber in London (and potentially much more of Europe), however, clearly highlight a potential flaw in the “gig economy” independent contractor business model upon which ridesharing companies are so dependent (and which they created). If European laws are changed to force companies to officially hire these contractors, the costs to ridesharing companies could skyrocket. Plus, it’s not inconceivable that changes in one region could quickly migrate to other regions, causing a much larger impact than a single legal requirement might first suggest.
Ironically, part of the problem is that these ridesharing services are a victim of their own success. So many people have become so dependent on driving for these services—either full-time or significant part-time efforts—that the ridesharing companies are seen as providing significant amounts of income to a large and growing group of individuals. The longer that process continues, the more dependent drivers will become on these ridesharing companies, and the more likely that employment with these entities starts to become a political issue with even more far-reaching ramifications.
In theory, of course, this latter problem was never supposed to happen. Built into the business model of the ridesharing companies was an “inevitable” evolution to a fleet of self-driving cars that wouldn’t require any drivers. The drivers were only ever meant to be a temporary solution until the “real model” of an on-demand pool of autonomous cars was available. The problem is, the timeframe to reach truly autonomous cars looks to be lengthening. Despite some of the frothier media commentary to the contrary, it’s becoming increasingly clear that the technical, logistical, regulatory, insurance, and even ethical hurdles that still face fully autonomous vehicles are extremely high. As a result, it could be well into the 2020s before the key technological and legal conditions are in place to make fully autonomous vehicles a mainstream reality. We’ll see plenty of experiments before then, but as soon as the seemingly inevitable first serious accident involving an autonomous car occurs—regardless of where the true fault lies—the process will once again slow.
The challenge with this timeframe is that the number of people, and the amount of income, that will be impacted as ridesharing companies start to move away from human drivers and towards autonomous fleets is going to be enormous. The nightmare scenario for these companies is that the transition from an independent driver model to one based on an autonomous fleet lasts so long that legislators end up feeling the need to step in. Large numbers of their constituents could end up being impacted by such a transition, leading to demands for political action, all of which could slow down the transition even further.
This is why I expect we’ll see a number of announcements similar to the recent Uber-Volvo arrangement about autonomous car partnerships. Certainly, the fact that Uber is working with a major car manufacturer like Volvo on autonomous cars is important, but it’s arguably also an effort to get people thinking about the transition to autonomous vehicles much sooner than is realistically possible. By driving the discussion towards the next stage in the ridesharing industry’s business model evolution, the announcement deflects attention away from what could be more pressing business model challenges in the near term.
There’s no question that ridesharing is a tremendously useful and, for many, essential addition to the range of service offerings that tech companies provide. But as the industry matures, there could end up being a number of unintended consequences stemming from ridesharing’s once revolutionary business model. Any combination of these consequences could force people to re-evaluate what the industry’s long-term opportunity may really be.
Last week I participated in the UBS global technology conference, where I talked broadly about Apple but also had some fascinating conversations with many of Apple’s key investors. In the course of these Apple-focused conversations, many of which drew on quite a bit of our recent research, a theme stood out to me that is worth exploring. It centers on the subtle yet significant behavior changes that happen within Apple’s customer base, which we don’t see around other products.
If you’re a parent, you’ve probably been put in the frustrating position of competing with your child’s phone for attention – and losing. Intermittent silences, grunts or one-word responses, fast-typing fingers – all telltale signs that he or she has found something more interesting than you. It’s often not even a close contest.
An article in The Atlantic drew attention to the dangers of unlimited smartphone use by teenagers – and smartphones are pervasive, as 80 percent of households now own at least one smartphone, according to Consumer Technology Association research. Since smartphones became ubiquitous about ten years ago, the time our teens spend with friends and on dates has dropped dramatically, while loneliness, lack of sleep and mental health issues have risen sharply.
Most adults are not digital natives. We’re new to the benefits – and challenges – of anytime/anywhere connectivity. Many of us have failed to set effective boundaries and are just as distracted as our children. To be the role models our children deserve, we must create a healthy household culture, including around how and when we use technology. Our grandparents did this with pinball and Pong, and our parents did it with Pac-Man and Minesweeper. We can set limits, too.
First, that means insisting that conversations and relationships are a top priority. Practically, this will look different for each family. Some might find it useful to create a tech-free zone in their house or car, where no phones, tablets or laptops are allowed. Instead, play car games or listen to audiobooks. Others might allow a limited amount of screen time per day, using parental control apps to enforce the limits. Whatever rules or boundaries you create, the goal is the same: reconnecting with your kids by temporarily disconnecting from your devices. My wife and I ban all electronics but eBooks in our children’s bedroom and limit our nine-year-old’s phone usage to an older phone that can only operate over Wi-Fi.
But creating a healthy household tech culture isn’t just about controlling the potentially negative aspects of tech devices. We should find new ways technology can lead our children to healthier, happier lives. Wearables, for instance, pair with our devices and allow kids to track their health and physical fitness and better train for their favorite sports. They’ll know how to monitor their wellness and watch for signs of oncoming illness – and they’ll be able to give doctors more precise information about their symptoms.
Wearables can even help parents of autistic children predict and prepare for episode triggers. Reveal, a wearable designed for kids with autism, closely tracks the signs of mood shifts and lets parents know when their children are on the verge of sensory overload. In the future, this type of device could be used to help kids with anxiety, cerebral palsy and other health issues.
Digital devices can also be used to foster our children’s creativity and expose them to new ideas. It used to be that a small group of big TV networks decided what our kids would watch – but thanks to the internet and tech enabling content creation by all kinds of artists, kids now have a nearly endless array of options. You probably haven’t heard of that band your teen is listening to – but then again, neither has half of his or her friends. And many of these options come from unknown global artists and creators, instead of just the entertainment giants.
Smartphones and tablets let kids connect, create and collaborate. Social platforms allow them to share their work and find likeminded peers who share similar interests. Twenty years ago, if you were the one kid in the neighborhood who liked to make movies, you’d be all alone in your hobby. But thanks to today’s digital devices, you can find other young directors to share ideas and techniques.
And connected tech can help parents keep their children safer. Location apps, for instance, offer parents an unprecedented amount of child supervision. Ceaseless questions like “Where are you going?” and “What are you doing?” disappear when parents can simply check their phones and see where their kids are. We can also track our kids’ digital whereabouts – the sites they visit, the content they watch – through parental control apps and software, preventing kids from inadvertently wandering to unsafe or unsavory corners of the internet.
It’s easy to get worried or frustrated when you try to have a conversation with your child and all you get is a dismissive glance and curt response. But remember: technology is a tool that can be used for good or bad, in excess or in moderation.
Before we start wringing our hands over technology’s influence on the next generation, we need to take a hard look at our own tech habits. One teen in The Atlantic piece said of her own generation, “I think we like our phones more than we like actual people.” What about the rest of us? Are we using technology creatively and actively, or are we passively and idly letting our technology use us?
With tech, as with all other innovative tools, it’s up to us to figure out how, when and where best to use them – and then show our children how it’s done.
This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing Tesla’s new Semi electric truck, their new Roadster sports car, Apple’s delay of their HomePod smart speaker, and expected Black Friday tech device shopping trends.
As we head into the latter part of this decade, it appears that the ‘digital divide’, which has historically referred to the haves and have-nots of broadband, is hitting wireless services, as well. This theme has crystallized in my mind over the past week, having been part of three important wireless-related events: The Telecom Infrastructure Project Summit, spearheaded by Facebook; the Qualcomm/T-Mobile launch of Gigabit LTE; and an industry analyst day hosted by leading infrastructure vendor Ericsson, which focused mainly on 5G and IoT.
It looks to me like the world is separating into four ‘tiers’ of wireless service. In Tier 1, you have the United States and Canada, Japan, South Korea, and, increasingly, China. These countries have 70-plus percent of their customers already on 4G LTE, and are rapidly moving along the LTE Advanced path toward Gigabit LTE. They are also likely to be among the first to deploy initial 5G services. A healthy (but not excessive) level of competition and high incomes correlate here, not only with wireless service spend but also with adoption of the most advanced handsets that take advantage of the best LTE has to offer.
Then there’s Tier 2: Europe, which has fallen a step behind. A decade ago, if you traveled to Europe, you’d marvel at how good wireless coverage was in comparison to the U.S. But Europe has lagged on the depth and breadth of LTE deployment. This is a huge change from the 3G era, when many European countries were among the leaders. There are a multitude of reasons, but chief among them are a somewhat stagnant economy, overheated competition that has depressed spend (and, as a consequence, capex), and a shift of the epicenters of wireless innovation from Europe to the U.S. and Asia. In fact, like an airplane circa 2017, Europe has something like two classes of economy: basic economy (some countries, and many areas outside cities), where good LTE service is still lacking; and premium economy, where 4G is closer to the top tier.
Tier 3 is where the fastest subscriber growth is. Amid our $1,000 smartphones and the hype around LTE Advanced and 5G, it’s easy to forget that many countries are just getting to 3G. Take Africa: only 50% of the continent has access to 3G coverage, and whereas we have phased out 2G here, it remains critical for voice there. In fact, most of the deployment in Africa over the next 5-10 years will be 3G, because 4G remains too expensive. In India, 69% of the population is still on 2G, although that is changing, and rapidly. It’s a similar story in Latin America, but with a slower pace of change. Getting mobile connectivity to people in these areas is critical, since wireless is likely to be their primary form of Internet access for the foreseeable future.
Expanding and improving connectivity to these regions is the major focus of the Telecom Infrastructure Project (TIP). Spearheaded by Facebook but now consisting of some 500 members, TIP aims to connect the next 1-2 billion people at a much lower cost than the typical $150,000 base station, by developing an open, software-defined network platform. Although still in its early stages with respect to deployments, TIP will at least push, if not disrupt, the incumbents. If TIP’s open LTE platform is successful, it might accelerate Tier 3 countries’ upgrade, or leapfrog, to 4G.
In addition to disruptive infrastructure, innovative business models are needed. One oft-cited example is a partnership model with operators, where revenue sharing could help fund projects. Then there’s Reliance Jio, which has disrupted the Indian market by focusing on alternative revenue streams, rather than trying to finance a build of hundreds of thousands of cell sites on the back of sub-$5-per-month ARPU.
Finally, there’s the ‘connecting the unconnected’ tier. This is still the intractable segment of the market, where a lot of effort is being expended but no viable, scalable connectivity solution yet exists. Developing cheap base stations doesn’t solve all the problems here. The main challenges are power and backhaul: the lack of a reliable power grid, inaccessible roads, on-site equipment theft, and even the lack of commercial and network data make planning difficult. It will take something different to get to the “last 1-2 billion”. Google Loon, OneWeb’s planned satellite service, and other ‘airborne’ solutions are all possibilities, but it will still be several years before we know whether these are viable options at scale that can deliver the sort of speed and capacity at least in the ballpark of 21st-century infrastructure.
So, four themes from “telecom infrastructure week”: 1) the rich will get richer, as 5G will be driven by the already-haves; 2) China will play a much bigger role in 5G innovation than it did in 4G—in infrastructure, chipsets, IoT deployments, and even driving global spectrum bands; 3) getting connectivity to the next 1-2 billion subs in Africa, Asia, and Latin America will have to be done in a dramatically less expensive fashion; and 4) reaching the ‘last 1-2 billion’ remains an as-yet unsolved challenge.
Pixel Buds Reviews are in and They aren’t Very Kind
The Pixel Buds were announced together with the Pixel 2 in early October at an event in San Francisco. The Pixel Buds are a wireless set of earbuds with a circular design and a cord that hangs behind the neck to keep them secure when not in use. They feature gesture controls for music playback, phone calls, and volume adjustment, and of course for activating the built-in Google Assistant. All for $159.
I’m in a contemplative mood this week, as this is my final column for Tech.pinions, and as such I thought I’d share a few big picture thoughts on the state of the tech market in late 2017. It strikes me that from a consumer perspective, in many ways we’ve never had it so good, but at the same time there are new threats and concerns which are also unprecedented. We will therefore be tempted to seek regulatory remedies and limits on the power of tech companies and technology, and while some of these may be worth pursuing, there’s also a danger that we politicize technology and undermine progress even as we seek to protect consumers and startups.
We’ve Never Had it so Good
First off, I’d argue that as consumers of personal technology we’ve never had it so good – the devices we have access to are unprecedented in both their raw power and their specific capabilities, from cameras to connectivity to displays and audio. And the key thing here is that no single manufacturer either dominates sales or has far and away the best devices: one of the best things about the current state of the market is that consumers have a number of great options in key categories, from smartphones to PCs to tablets and TVs. On the smartphone front, Apple and Samsung make the most, and arguably the best, premium smartphones, but new players like Google and Essential are creating promising entrants, while the old guard, including LG and others, continues to produce interesting devices too. On key features like cameras, Apple, Samsung, Google and others all deliver great performance, and which is best is mostly a matter of personal preference rather than objective fact.
In the smartphone market in particular, it’s also notable that consumers don’t have to spend the $700-plus that’s now required to buy a top-of-the-line smartphone in order to have a great experience. There are less powerful but still serviceable smartphones available at nearly every price point from $50 to $800, making this technology available to consumers throughout the world and thereby transforming lives and economies. All of this is also true in other categories like tablets and PCs, though low-end PCs still tend to prove the maxim that you get what you pay for more than other categories of consumer hardware.
Technology is an Enormous Force for Good
That last point is worth expanding upon: not only is our technology great, but it has done great good in the world, connecting people with each other and other resources as never before, opening up a world of information and content to anyone, anywhere, on the device of their choosing. The Internet has both allowed even the smallest publisher to reach massive audiences and allowed tiny interest groups to find comradeship across the globe. Technology is connecting families, giving opportunity to poor and otherwise marginalized populations, including the disabled and ethnic minorities.
But It Has Also Created Worrying Side Effects
None of that is to say that technology has created unalloyed good in the world. Many of the same enablers that have permitted innovation, positive communication, community building, and more to flourish have also fed conspirators of various stripes, trolls, and other bad actors and their ability to do nefarious work. Platforms designed to allow people to connect in positive ways have also enabled the spread of misinformation, harassment and abuse, and more recently even meddling in elections. It’s clear that we’re only beginning to discover the scope and potential of some of the negative effects of technology in our lives.
Meanwhile, tech as an industry is characterized by other unpleasant traits, notably a lack of diversity and a tendency to downplay or ignore the potential of new technologies for evil as well as good. Too often Silicon Valley demonstrates its lack of diversity in its lack of understanding of how its inventions will impact marginalized populations or even the population as a whole. Its self-belief is one of its greatest strengths but also one of its greatest weaknesses. I’ve also pointed out that, with few exceptions, the largest companies in the industry are dominant and threaten to continue to squeeze out innovators.
Regulation is a Tempting Solution
In light of all this, voices from both sides of the political spectrum in the US and beyond are calling more loudly for regulation of big tech companies, whether on antitrust, content, advertising transparency, or other grounds. Some of these calls have obvious merit, and would bring the tech industry in line with older industries that provide similar functions. But my biggest worries with tech regulation are always that those writing the laws have an imperfect understanding of the market and that the process is so slow as to be ineffective in dealing with real problems while often creating unintended consequences. I’m also increasingly aware that in some of these debates a key constituency – media – has an inherent conflict of interest because it’s threatened by some of the very platforms it covers.
I’m hoping that we don’t see knee-jerk, often politically-motivated calls for regulation resulting in laws that would limit the ability of companies to innovate while not really solving the underlying problems. I have little faith that the current US political leadership will get anything meaningful done here without screwing it up, while the bigger threat to US tech currently comes from the EU and its efforts to punish big US tech companies for underpaying taxes and squeezing out local competitors.
A Promising Future
I’m inherently an optimist, and that optimism extends to the tech industry and the role of technology in our lives. I’m not naive enough to think that all the issues will merely go away, but on balance I think the positive benefits will be greater than the drawbacks, and humanity as a whole will continue to benefit enormously from the advances that will be made, especially in areas like healthcare, where consumer tech companies are just starting to scratch the surface of what’s possible. AI and machine learning bring their own threats and downsides, but I tend to think the more apocalyptic voices here are off the mark, while there could be significant benefits from smarter technology in our lives too.
Google is the new kid on the block in smartphones, but they are being treated like an established player. Their sales of the Pixel 1, and likely those of the Pixel 2/XL, are very small, maybe 5 million units total in 2018, but because they are Google, they are being taken seriously. While I completely agree Google should not be graded on a curve, and many media outlets are slowly waking up to this reality, I am willing to give them the benefit of the doubt for at least the next few years.
On numerous occasions, Apple CEO Tim Cook has stated that he is extremely excited about AR and believes it will usher in a new era in mobile computing. I have done many interviews with media and industry folks about Cook’s comments and Apple’s overall optimism about AR, and I tell them that I believe Apple is moving in a very calculated manner when it comes to how its AR strategy plays out.
Apple is serious about content. You just need to look back at the past year to see not just its ambition but also its investment in this space.
Back in June, Apple went on a hiring spree: first, the former head of Amazon’s Fire TV business, Timothy D. Twerdahl, and then Jamie Erlicht and Zack Van Amburg, two Sony Pictures executives hired to oversee all aspects of video programming, reporting to Eddy Cue.
In August, the Wall Street Journal reported that Apple was planning to spend roughly $1 billion to procure and produce original content over the next twelve months. So far this year, Apple has produced two original series, “Planet of the Apps” and “Carpool Karaoke,” which met with only a lukewarm reception.
In October, news broke that Apple had struck a deal with Steven Spielberg for an updated version of the “Amazing Stories” series. According to the WSJ report, Spielberg will produce ten episodes of the original series with a budget of $5 million a pop.
Finally, just last week, Apple was said to have ordered a yet to be titled morning show drama series which will be executive produced by Jennifer Aniston and Reese Witherspoon. The series is written and executive produced by Emmy-nominated Jay Carson (“House of Cards”) and CNN reporter Brian Stelter will consult on the project drawing from his book “Top of the Morning: Inside the Cutthroat World of Morning TV.”
When I saw the news about this last show, I jokingly asked on Twitter where I could watch it. This to me is a serious question Apple must address as it plans to create more content.
Reaching a Broader Base than Apple TV can offer
If Apple is planning to spend $1 billion in content, surely the hope is to reach as broad a base as they can. Apple has been trying to find the right formula for its “TV Hobby” for some time now.
It first focused on Apps, but the magic that apps brought to the iPhone and iPad failed to materialize with Apple TV. Apple then turned its attention to “fixing TV” by improving content access by enabling single sign-on. The issue with that is that TV providers like Comcast are also focusing on making it easier for users to find their content and they market features such as voice control quite heavily. Digital Assistants like Alexa and Google Assistant are also getting in the game.
While Apple has not shared Apple TV sales numbers for a while, it is reasonable to expect that even with the latest update that brought 4K support, Apple is still not seeing the numbers that would guarantee a broad enough audience.
This would explain why Apple’s first attempts at original content were distributed via Apple Music rather than Apple TV. iPhones and iPads offer Apple a much broader base for its content. Making the content part of the subscription is also, of course, a good way to reward subscribers. However, I don’t think this would be a viable long-term solution for Apple. Apple must decide whether it is serious about TV in the home – especially as the little box doubles as a connected home hub – or if it is serious about creating content and competing with Amazon, Hulu, and Netflix. While the two are not mutually exclusive, Apple could also decide to create a new service that is not tied to Apple TV. iTunes is too tired as a brand to help Apple in its endeavor, and Apple Music should be about music. Also, as costs increase, Apple could not simply roll this new content into the current Apple Music subscription. Of course, since consumers have choices, being able to offer an all-inclusive subscription for video and music at a competitive price could be a differentiator for Apple, at least over some of its competitors.
Access to the Right Content….
Aside from producing its own content, Apple has also been in talks with Hollywood studios to get earlier access to movies to be distributed at a premium price on iTunes. So far, studios have received stiff resistance from theaters due to the significant loss they would incur: the money that we all spend on popcorn and other concessions matters more to theaters than the number of paying customers for the movies themselves.
For the studios, the biggest problem would be guaranteeing that the content could not be easily pirated, although earlier, wider availability might lower in-theater piracy. While iTunes encrypts video, one could always record the movie with an external device such as a phone. Screening Room, a new service that Napster’s founder Sean Parker is trying to create that would also allow for early viewing of movies still in theaters, uses a watermark which, while not deterring piracy, makes it trackable and therefore punishable.
For consumers, a rental price of between $25 and $50 per movie would still be very competitive compared to what a movie outing usually costs. The service would also speak to changing consumer behaviors. Larger, high-definition TVs are dropping in price, making the home-theater experience a reality for more and more consumers. At the same time, the theater experience has not improved in a way that many consumers would consider proportional to the price hikes for 3D and IMAX. It only takes going to a couple of popular movies to see that the longer line is usually for the regular screening rather than the 3D or IMAX screening.
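As a rough illustration of that price comparison, here is a back-of-the-envelope sketch. The ticket and concession prices below are assumptions chosen for the example, not reported figures:

```python
# Hypothetical cost comparison: premium home rental vs. a theater outing.
# All prices are illustrative assumptions, not reported industry figures.

TICKET_PRICE = 14.00            # assumed average ticket price (3D/IMAX runs higher)
CONCESSIONS_PER_PERSON = 8.00   # assumed popcorn/drink spend per person
PARTY_SIZE = 4                  # e.g., a family of four

theater_cost = PARTY_SIZE * (TICKET_PRICE + CONCESSIONS_PER_PERSON)
home_rental_cost = 50.00        # top of the rumored $25-$50 premium rental range

print(f"Theater outing for {PARTY_SIZE}: ${theater_cost:.2f}")
print(f"Premium home rental: ${home_rental_cost:.2f}")
```

Even at the high end of the rumored range, a single rental shared by the whole household undercuts the outing under these assumptions, which is the point the comparison hinges on.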
….and New Ways to Consume Content
Content consumption is also changing. AR and VR open up opportunities to experience content in a different, more immersive way. Tim Cook has been very vocal about his belief that AR offers a much broader opportunity than VR, and I tend to agree with him. Yet, I do believe that VR offers a great opportunity to deliver a premium content consumption experience as well as new content altogether: anything from a behind-the-scenes tour of a movie to a meet-and-greet with the stars. VR also plays well with music and sports by providing access to concerts and events. Apple’s attempt to engage users with artists through Apple Music was not very successful, but who would say no to having their favorite artist perform for them in their living room?
With rumors around a possible set of Apple Glasses, I can see Apple offering this as a premium service for the home.
With the Apple Glasses a couple of years away, Apple needs to decide if the pool of consumers interested in such content today is big enough to start thinking about Mac support for VR. While the number of Mac users is a drop in the ocean compared to Windows, they represent a much more profitable target for content providers and developers, one that should not be overlooked.
There are some interesting discussions happening, some public but many behind closed doors, about the automotive industry’s role within the broad category of the ride-sharing economy and autonomous fleets. You may not connect the dots to draw parallels between the ride-sharing economy and autonomy, but they are more closely related than many realize.
The tech industry’s lack of diversity and mind-numbing sea of sameness when it comes to opinions are, unfortunately, now widely recognized. But there is a subtler, and lesser-known limitation in tech that, I believe, is also having a devastating influence on the industry: the lack of liberal arts graduates.
As the proud graduate of a quintessential liberal arts program—Notre Dame’s Program of Liberal Studies, which combines literature, philosophy, theology, natural sciences, history and more into a Renaissance-style general education via a study of the “great books” of both Western and Eastern civilizations—I’m unquestionably biased in my perspective. Nevertheless, it’s becoming increasingly clear that the lack of intelligent reflection, discussion, and debate on why and for what purpose technologies are being developed and applied in tech industry products and services needs to be addressed. Even an ethnographically diverse set of engineers and other tech-focused individuals can’t always see, nor understand, some of the challenges that today’s tech products are bringing to the fore.
On the other hand, while no two liberal arts programs are the same, the one consistent thread across them is that they teach people to think critically, ask these essential why questions, and work through the implications and longer-term impact of ideas and concepts, particularly as they relate to people. Applying these kinds of human-centric principles to tech could make a profoundly important impact.
Consider, for example, where social media has brought us as a society. From a scientific and programming perspective, it’s clearly impressive to be able to not only link billions of people around the world and let them communicate with one another, but to use advanced computer science to create algorithms that can continuously feed each one of us with the kind of information that specifically interests each one of us (in theory, at least).
However, a liberal arts major familiar with works like Alexis de Tocqueville’s “Democracy in America,” John Stuart Mill’s essay “On Liberty,” or even the work of ancient Greek historians, might have been able to recognize much sooner the potential for the “tyranny of the majority” or other disconcerting sociological phenomena that are embedded into the very nature of today’s social media platforms. While seemingly democratic at a superficial level, a system in which the lack of structure means that all voices carry equal weight, and yet popularity, not experience or intelligence, actually drives influence, is clearly in need of more refinement and thought than it was first given.
Beyond these more philosophical debates, there are an increasing number of very practical concerns around the ethical application of technology in fields ranging from medicine to transportation to basic data analysis. Toss in the mind-numbing array of questions that arise from technologies like artificial intelligence (AI) and machine learning, and it’s clear that there’s a lot more discussion that needs to happen around how technologies get applied, rather than just how to build them.
Given the already enormous impact that technology has in our present lives and the inevitable increases that will occur, there needs to be more thoughtful analyses about the roles technology can and should play. It’s also important to recognize that the kinds of exciting technological developments that we have now (and will have much more of in the future) affect all people—not just the types who are currently doing much of the development work. That’s why it’s so critical to increase the diversity of opinions, experiences, and perspectives of people working to bring this technological future to life.
The greater the variety of voices—not only from a gender, race and ethnographic perspective, but an educational one as well—the more balanced, successful and long-lasting the choir of “future creators” will be.
As I’ve mentioned before, one of the markets I follow closely is the US wireless market, with a focus on the four largest network operators. These operators continue to be by far the largest channel for smartphone sales in the US, and what I’ll share today is a mix of insights on the wireless market itself and the implications for the smartphone market.
But let’s be fair. The title of an article is often clickbait that is not representative of the article’s contents. So, is that the case here?
Have we reached peak phone?
I would argue that we are indeed standing on the summit of peak “phone as hardware….”
Yowza. That’s quite a claim. Let’s take a look at the Professor’s reasoning. (I’ve added numbers to the sections on “Theory” and “The Next Vector” for added clarity)
1.1 To understand the future of phones, it helps to look at the history of…innovation.
1.2 Innovation in technology product categories tends to proceed along a specific dimension—a “vector of differentiation.”
1.3 Players pursue innovation along a vector of differentiation until the vector runs out of steam.
1.4 This happens for two reasons: limits to innovation along the vector of focus and the ability of competitors to catch up with market leaders.
1.5 (W)hen vectors of differentiation shift, … the focus of innovation shifts to a different vector and new market leaders emerge, … incumbents often get left behind and market leaders tend to fall by the wayside.
2. The Next Vector
2.1 Now, the vector of differentiation is shifting yet again, away from hardware altogether.
2.2 Sheets of glass are simply no longer the most fertile ground for innovation.
2.3 We are on the verge of a major shift in the phone and device space, from hardware as the focus to artificial intelligence (AI) and AI-based software and agents.
2.4 This means nothing short of redefinition of the personal electronics that matter most to us.
2.5 The shifting vector of differentiation to AI and agents does not bode well for Apple.
Well, of course, this “does not bode well for Apple” because nothing ever bodes well for Apple.
3. AI Leaders
In the brave new world of AI, Google and Amazon have the clear edge over Apple.
Oh brother, here we go.
Amazon is making rapid progress along this vector of differentiation, as are Google (with its TensorFlow open-source platform for AI apps) and even Microsoft.
In other words, everybody’s making progress in AI. Except for Apple. Because, you know. They’re Apple.
I have some nits to pick with the Professor’s underlying premises. Have Smartphones really stagnated? Even if AI is the future, are we sure what that future will look like? And are we sure that the AI future is upon us here and now or is it still, you know, in the future? And what makes the Professor think that moving ahead toward AI necessarily means leaving hardware behind?
Inquiring minds want to know.
5. Predictions About the Future
Predictions are hard, especially ones about the future. ~ not Yogi Berra
The thing is, we can know the broad outlines of the future without having any inkling about what the specific details of that future are going to be.
— Everybody knew that cars were the future, but while everyone else was trying to make a better car, Ford made a better assembly line.
— Everybody knew that personal computers were the future, but while everyone else was trying to make a better computer, Microsoft made a better operating system.
— Everybody knew that mobile computing was the future, but while everyone else was trying to make better phones and tablets, Apple made a phone that was a tablet.
6. The Race
If the age of AI is upon us, where is the assembly line of AI? Or the Windows 95 of AI? Or the iPhone of AI?
Saying the age of AI is upon us is like saying that the age of mobile was upon us when Microsoft introduced their first tablet in 2000 or when RIM introduced their iconic Blackberry phone in 2002.
The mistake we commonly make is to talk about who is “ahead”. But like Microsoft with the tablet and RIM with the phone, it doesn’t matter how far “ahead” one is if they’re running in the wrong race. Microsoft, RIM, Nokia — even Palm — were ahead of Apple in the mobile phone race. Apple wasn’t even in the running. But Apple reset the game by starting a new race — the smartphone race. And in the smartphone race, Apple obtained an insurmountable lead while the incumbent mobile phone leaders were left helplessly behind, in part because Apple got there first, but just as importantly because the mobile phone incumbents didn’t know the new race had begun or didn’t know the new race was important or didn’t even know where the starting line was.
7. The Ladder
If the ladder is not leaning against the right wall, every step we take just gets us to the wrong place faster. ~ Stephen R. Covey
I hope you’ll forgive me, but let me use one more metaphor to drive home this point because I think it’s important.
It doesn’t matter most how high you climb the ladder of success. What matters most is whether your ladder is leaning against the right wall. In mobile technology, Microsoft, RIM, etc. were at the top of their respective ladders. But with the smartphone, Apple leaned their ladder against the right wall.
Google, Amazon, Microsoft may or may not be “ahead” in AI. But that only matters if they’ve placed their AI ladder against the right wall.
8. From Hardware
One of the Professor’s most baffling assertions is that the dawn of AI must necessarily coincide with the sunset of hardware. The following quotes from his article exemplify this attitude (numbers and emphasis added):
8.1 “(T)he vector of differentiation is shifting yet again, away from hardware altogether.”
8.2 “We are on the verge of a major shift in the phone and device space, from hardware as the focus to artificial intelligence (AI) and AI-based software and agents.”
8.3 “(W)e shift from hardware-based innovation to differentiation around AI-driven technologies.”
8.4 “Apple is falling behind in the AI race, as it remains a hardware company at its core and it has not embraced the open-source and collaborative approach that Google and Amazon are pioneering in AI.”
9. Premise Refuted
The proposition that a move toward AI is a move away from hardware is refuted right within the article itself. Note how even as the Professor praises Amazon and Google for their AI prowess and their AI promise, he does so by referring — at least in part — to how their HARDWARE will use AI. (Again, the added emphasis is mine.)
The advent of Amazon’s skill store and similar innovations speak to the need to create an AI-rich ecosystem where hardware, software, and third-party contributors work in concert to enhance consumer experience across life domains.
Consider Google’s Pixel 2 phone: Driven by AI-based technology, it offers unprecedented photo-enhancement features and deeper hardware-software integration.
As it happens, Ben Thompson — who was a student of the Professor’s — was thinking along the same lines as I. (Great minds, and all that.)
“The presumption is that the usage of Technology B necessitates no longer using Technology A; it follows, then, that once Technology B becomes more important, Technology A is doomed.”
“In fact, though, most paradigm shifts are layered on top of what came before. The Internet was used on PCs, social networks are used alongside search engines. … In other words, there is no reason to expect that the arrival of artificial intelligence means that people will no longer care about what smartphone they use.”
Ben Thompson’s entire article on this matter is well worth a read. You can find it here.
10. Premise Disputed
Let’s re-review the Professor’s basic chain of logic (using my words, not his):
— Apple is a hardware company; and
— AI is the future; therefore
— The move from hardware to AI will leave a hardware maker, like Apple, in its wake.
As I’ve already pointed out, the chain of logic is flawed because the move toward AI is not a move away from hardware.
Furthermore, did you notice anything else odd about the Professor’s assertions? His argument is founded upon the premise that Apple is a hardware company. But what industry expert worth their salt would describe Apple as “a hardware company”?
Apple is not just a maker of hardware like, say, HTC, or LG. Apple makes the whole widget. They make both the hardware and software; both the phone and the operating system; both the iPhone and the iOS. What makes Apple unique is that they are a provider of integrated solutions.
Why does that matter? Why does it matter that Apple makes both the hardware and the software? It matters because saying AI is unrelated to hardware makes little sense. But saying AI is unrelated to software makes no sense whatsoever.
The Professor, I think, has it exactly backward. He thinks that AI is going to somehow be independent of phones, and therefore companies like Apple are going to suffer. But isn’t it far more likely that devices like the iPhone are going to be the platform that AI builds upon?
11. Apple AI
Next-generation devices will use AI and deep learning to recognize our voices, faces, and emotions. – (the Professor)
We don’t have to wait until the next generation of devices to use AI for those purposes. Apple does most of those things already.
It seems to me that the Professor did not look carefully at the iPhone X before he wrote his article. He’s not only ignoring the phone’s future possibilities, he’s also ignoring its present capabilities. Apple is baking AI right into their chip design. And Apple uses AI, for example, to allow Face ID to adjust to changes in one’s face over time. The iPhone X is chock full of AI.
I can’t fathom the idea that Apple is behind in AI. Its devices are packed with it. ~ Joshua Gans, @joshgans
12. Apple AR
As AI-driven phones like Google’s Pixel 2 and virtual agents like Amazon Echo proliferate, smart devices that understand and interact with us and offer a virtual and/or augmented reality will become a larger part of our environment. Today’s smartphones will likely recede into the background.
I think the Professor has gotten this backward too. Where he sees stagnation in iPhone innovation, I see the potential for dynamic growth.
The Professor bemoans the fact that Apple is falling behind in AR. Maybe I’m missing something here (I’m not) but it seems to me that with the iPhone 8 and, in particular, with the iPhone X, that — far from falling behind — Apple has taken a substantial lead in the practical implementation of AR. They and they alone have the hardware and the software chops necessary to create AR that actually works for millions upon millions of people, right now, today. What’s more, Apple’s already substantial lead in implementing AR in already existing products may be about to become much, much bigger.
In 2007, Apple introduced the iPhone to the world. But it wasn’t until 2008 — with the addition of the App Store — that the iPhone’s full potential was revealed. It was then that the iPhone went from being a proprietary device, made only by Apple, to becoming a platform, available to all. And that changed everything, because a platform allows one to harness the abilities of others. And not only do those others create things for you without getting paid by you, you actually get to charge them a percentage of their profits for the privilege of doing so!
From the beginning, the iPhone was a blank canvas. But the APIs necessary for developers to create apps were the paint. The App Store invited the most creative minds in tech to create things that Apple could never, themselves, have imagined. And developers accepted the invitation with enthusiasm and proceeded with gusto.
That was 2008. This is 2017. And in 2017 Apple may — at least in part — have replicated the miracle of the App Store all over again. With the App Store, Apple created a canvas for apps. Today, Apple has created a new canvas suitable for the creation of AR. And there are literally tens of thousands of developers focusing their efforts on painting the next Mona Lisa of AR.
Just as an example of what is already possible, Warby Parker is using face mapping on the iPhone to provide glasses recommendations.
And, as the following headline attests, Animoji Karaoke is a thing.
Animoji karaoke is the new iPhone X feature taking over the internet ~ Evening Standard
It’s hard to believe that Apple will not, in future iterations of the iPhone, use the front-facing camera to extend one’s ability to turn animate objects into Animojis. And the possibilities there seem, well, endless.
And the thing is, we simply do not know — and cannot know — what the new AR platform may produce. When Apple introduced the App Store in 2008, we could not imagine an Uber or an Airbnb or a million other apps that would soon be created and sold in the App Store. Similarly, now that Apple has introduced AR to the phone, we simply cannot imagine the uses developers may make of it. It’s like trying to imagine the unimaginable.
That’s the beauty of a platform. And Apple — and only Apple — is currently in a position to make that all happen.
This isn’t the end for the iPhone as the Professor contends. It may, in fact, be a new beginning. We’re about to see the start of a new wave of third-party developer creativity. And perhaps that wave will swell into a tidal wave of innovation.
13. Apple Glasses
I would argue that we are indeed standing on the summit of peak “phone as hardware”: While Apple’s newest iPhone offers some impressive hardware features, it does not represent the beginning of the next 10 years of the smartphone, as Apple claims.
Sheets of glass are simply no longer the most fertile ground for innovation.
While the Professor bemoans the fact that Apple is a mere hardware shop, he ignores the fact that there is no one better positioned to move from phones to glasses than is Apple. And as the following two articles attest, Apple may well be preparing to make that exact move.
Apple to Ramp Up Work on Augmented Reality Headset ~ Mark Gurman, November 8, 2017
Apple, Inc.’s Augmented Reality Glasses Could Be Closer Than You Think ~ Evan Niu, CFA, The Motley Fool
The Professor contends that iPhones are going to be rendered moot by the next “vector of innovation” and Apple is going to suffer for it. But for all we know, Apple glasses may be the next “vector of innovation,” and Apple may own it in the same way they currently own the smartphone revolution.
Turns out that “sheets of glass” may still be fertile ground for innovation. And far from being relegated to a maker of legacy products, as the Professor contends, Apple may well be on the verge of a second renaissance.
When it comes to the demise of the iPhone, we can but recall Mark Twain’s reaction upon reading his own, somewhat premature, obituary.
This week’s Tech.pinions podcast features Jan Dawson and Bob O’Donnell discussing the challenges facing Snap, multiple developments in the semiconductor industry, including the Intel/AMD collaboration, the introduction of ARM-based server chips from Qualcomm, and the potential purchase of Qualcomm by Broadcom, and finally a discussion of Twitter’s 280-character tweets.
The Mixed Week of Twitter: 280 characters and the Blue Checkmark
This was a pretty busy week for Twitter, and not always for the right reasons. First, the social platform turned the trial run that gave a few users 280 characters per tweet into the new character-limit norm. The exceptions are Twitter users tweeting in Japanese, Korean, and Chinese, who remain at the 140-character limit for now.
One of the more important underlying trends in the technology industry is one that most people are not following very closely. It has to do with the massive consolidation of the semiconductor industry. For years I have been writing about how this consolidation is inevitable. It is being driven by the massive expansion of technology into industries that were not technology-driven before: healthcare, industrial, all forms of consumer products, and many other industry verticals. These companies, which now all of a sudden have to become technology companies, do not know how to manage a complicated supply chain of components and sensors, all comprising semiconductor industry goods. They are more interested in going to one vendor to get the brains they need to put in their products and don’t want to manage many vendors, as was the norm. So it makes sense that if a company chooses Qualcomm or Intel, for example, for its main brain, it also hopes to get the other sensors and chipsets it needs from that same vendor. This subtle, yet very powerful, nuance is driving the massive consolidation we see in the semiconductor industry.
This week’s Snap Inc earnings call was an indictment of the strategy pursued by the company in regard to both its core Snapchat app and its Spectacles hardware. The company has failed to drive two of the three major metrics that are key to success in the space, and it reversed its long-standing strategic stances on several key topics during a single earnings call. Having resisted calls for change for months, it appears Snap is now trying to change everything at once.
The Multiplier Key to Ad-Based App Growth
There’s a simple formula which is key to growing revenues for any ad-based online or app company:
user growth x engagement growth x rising ad prices
That’s the formula that’s served Facebook so enormously well over the last few years, and it’s also the one which Twitter and others have failed to implement effectively, with Snapchat seemingly the latest company to do so. Snapchat is executing on just one of these adequately – growing engagement and time spent – though it doesn’t consistently report the metrics needed to measure that progress over time. We do know that at the time Snap filed for its IPO, its users spent an average of 25-30 minutes per day in its app, and that number has grown since, but we don’t have any sense of the longer-term trajectory here.
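That multiplicative relationship can be sketched in a few lines. The growth factors below are purely illustrative assumptions, not Facebook’s or Snap’s actual numbers; the point is only that one shrinking factor drags down the whole product:

```python
# A minimal sketch of the ad-revenue multiplier described above:
# revenue growth ~ user growth x engagement growth x ad-price growth.
# Each argument is a multiplicative factor (1.0 = flat quarter).

def revenue_growth(user_growth: float, engagement_growth: float,
                   ad_price_growth: float) -> float:
    """Combine the three growth factors into overall revenue growth."""
    return user_growth * engagement_growth * ad_price_growth

# A Facebook-like quarter: all three factors rising, so they compound.
strong = revenue_growth(1.15, 1.10, 1.20)

# A Snap-like quarter: engagement up, users nearly flat, ad prices falling.
# Falling prices more than cancel out the engagement gains.
weak = revenue_growth(1.02, 1.15, 0.60)

print(f"all factors rising: {strong:.3f}")
print(f"ad prices falling:  {weak:.3f}")
```

Under these made-up numbers, the first scenario grows revenue by roughly half while the second actually shrinks it, despite engagement rising, which is exactly the trap the article describes Snap falling into.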
On the other two metrics – user growth and rising ad prices – Snap has fallen woefully short. The user growth it saw in early 2016 was clearly something of a temporary phenomenon rather than a reliable predictor of future growth rates, though the company argued otherwise in its S-1 and clearly expected stronger growth than it’s actually seen since its IPO. Instead, what we’ve got is Twitter-like incrementalism rather than the strong growth that should characterize a social app in its prime:
The line across the chart hits at 10 million users per quarter, a milestone the company beat three times in late 2015 and early 2016, but hasn’t crossed again since, with the most recent quarter at just half that pace. That kind of user growth clearly isn’t going to get Snapchat where it needs to be from a revenue perspective.
Turning to prices per ad, we have no way to measure those directly, but it’s abundantly clear that they’ve been falling rather than rising as Snapchat has introduced programmatic ad buying. That change has lowered the entry point for advertising on Snapchat by three orders of magnitude, per management’s earnings call commentary, and demonstrated in the process that Snapchat’s earlier fixed rate cards were priced vastly above what supply and demand would have dictated. Now that advertisers have a choice in the matter, they’re paying vastly less, largely because there’s so little competition to fill those ad slots.
As such, instead of rapid user growth and rising engagement being multiplied by rising ad prices, Snap has seen revenues grow anemically, with average revenue per user in the US very low at around $2, and outside the US vanishingly small at under 50 cents in Europe and just 30 cents in the rest of the world. That won’t turn Snap into the kind of advertising powerhouse it clearly wants to be, and something needs to change.
The Dam Breaks
All of this has been fairly clear to those of us watching the company with keen eyes from the outside for some time – indeed, I pointed out the terrible timing of Snap’s IPO back in February, citing already apparent slow user growth. And yet within Snap there’s been a resistance to change, which I can only assume has its roots in CEO Evan Spiegel’s conviction that he’s a product genius. He’s been reported to resist data-based approaches to product management, instead favoring his own instincts, which have undeniably created a phenomenon but don’t seem to be serving Snap as well recently.
Now, it appears that the dam has finally broken, and rather than subtly embracing some of the changes that have been called for, Snap appears to be doing a 180 on almost every aspect of its strategy:
Design – Snapchat has long been criticized for its unfriendly UI, while Snap’s management has defended it as part of its unique value proposition and argued that those who don’t get it aren’t its target users. Now, it appears poised for a major redesign, likely along the lines described here.
Creators and celebrities – Snapchat has famously eschewed the courtship of creators and celebrities common to nearly every other big social platform, arguing that its organic tools were enough to attract and retain them. There’s been evidence for some time that this attitude was leading it to lose share among these groups (and their followers) to Instagram and other platforms. Now it appears that it will belatedly embrace them, though I’d argue it may well be too late for that.
Curation/AI – Snapchat has also resisted the pull of AI-based curation of content, instead serving up a relatively small but relatively universal set of content to users in particular geographies, as a result of which it’s lacked the diversity of content and personalization found in other apps. It now appears ready to embrace this strategy too, though it acknowledges the risks inherent in moving away from the relatively confined form of content sharing it’s focused on to date.
Pursuing other demographics – Snapchat had for a long time prioritized its iOS apps because that’s where it saw the highest user engagement, something it acknowledged in its S-1 filing. It has belatedly embraced Android, and after months of small tweaks to its Android app is now apparently working on an overhauled version that should fix some of the issues it’s had until now. It’s also trying to broaden the scope of its audience beyond the teenaged and young adult audiences it’s dominated in the past, but that also brings inherent risks as it invites the parents of its current users into what’s felt like an intimate and separate space.
Spectacles – Snapchat launched its first hardware product late last year, seemingly out of nowhere, with a clever marketing strategy that focused on artificial scarcity. However, management seemed to completely misinterpret the early sales of the device and ordered way too many units, leading to a write-down of inventory that’s twice the size of its revenue from Spectacles in its first three quarters on sale. This feels like yet another example of gut winning out over data.
The big question is whether any or all of this will actually help to turn around the two metrics where Snap is currently failing, without damaging the one metric – engagement – where it’s been succeeding. I’m certainly skeptical, especially in the context of ongoing inroads by Instagram and others into Snapchat’s once unique mindshare among younger people. The biggest risk is that Snapchat damages the current user experience as it pursues these new features and the users they’re designed to attract, eroding its core strategic advantage: its penetration of its core demographic.
Raising kids in the Generation Z demographic is an eye-opening experience on many fronts, the biggest being their approach to technology. While it is true that computers, and the smartphone in particular, are completely changing the paradigm for learning and entertainment for this demographic, the one area that has piqued my interest most is their new paradigm for communication.
The iPhone X has now been in the hands of reviewers for just over a week and in the hands of real-life customers for a bit less than that. A lot is different, but the focus of many early reviews was Face ID, the most significant change of all. In a way, Face ID is the “mother of all changes” for iPhone X. Because Face ID replaced Touch ID, the UI on iPhone X has been redesigned with new gestures that let you navigate the content on your phone effortlessly, although those gestures are still somewhat foreign to users.
Some of the reviewers tried to spoof the iPhone X using hats, scarves, sunglasses, even masks and twins, with some of the commentary turning negative when Face ID failed to work under those circumstances.
I am as blind as a bat, and I usually wear contact lenses, but in the evening or when I travel I wear glasses. Like many women, I also wear makeup and style my hair in different ways. Looking different is a real-life thing for me, so figuring out whether Face ID worked on all of those occasions was not just an interesting experiment; it was a necessity. Face ID worked better than I expected. My expectation was that it would fail in a way similar to Windows Hello on my PCs, which I had to retrain, by having the camera re-scan my face, every time I wore glasses. But Face ID needed no training.
If you watch the excellent video that The Verge posted, you can see the amount of technology involved in making an identification every time you engage Face ID to unlock your iPhone X.
There was one time when Face ID did not work for me. It was the morning after I got the iPhone X. I woke up in a dark bedroom and, as I always do, reached out and grabbed the phone sitting on the bedside table. The room was dark, I had no makeup on, my hair was a mess, and I was squinting while adjusting to the brightness of the screen. Face ID did not authenticate me, and why should it?
Husband joke aside, why would we expect something different from Apple’s technology than we do from any other ID service? Look at what most governments require for a suitable passport photo:
Your head must face the camera directly with full face in view.
You must have a neutral facial expression or a natural smile, with both eyes open.
It must be taken in clothing normally worn on a daily basis.
You cannot wear glasses.
You cannot wear a hat or head covering.
You cannot wear headphones or wireless hands-free devices.
Face ID works by taking a mathematical model of your face and checking it against the original scan you registered when you set up Face ID on your new iPhone X. Thanks to the TrueDepth camera, this is not just a flat image but a depth map of the face and all the features that make it up. Face ID also uses Attention Awareness, meaning it checks your eyes too: if you are not looking at the screen, your iPhone X stays locked.
When you understand how the technology works, you know why Face ID should not unlock your phone if a scarf is covering half of your face, if glasses prevent Face ID from seeing your eyes, or if you try to use it when you first wake up in the morning with your face half buried in a pillow. If you think about it, this is not very different from how Touch ID works: if your finger was wet or too cold, or you had a cut, Touch ID would not authenticate you because your fingerprint differed from the one you registered when you set it up.
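The matching logic described above can be illustrated with a toy sketch. This is purely hypothetical and not Apple’s actual algorithm: Face ID runs neural networks on a depth map inside the Secure Enclave, while here a “face” is just a short list of numbers standing in for a mathematical model of facial features, compared against the enrolled template with a distance threshold, plus an Attention Awareness check.

```python
import math

def distance(a, b):
    # Euclidean distance between two toy face "embeddings"
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def unlock(scan, template, attention=True, threshold=0.5):
    """Unlock only if the scan is close enough to the enrolled template
    AND the user is looking at the screen (the Attention Awareness idea)."""
    if not attention:
        return False
    return distance(scan, template) < threshold

enrolled = [0.10, 0.42, 0.77]       # hypothetical template captured at setup
morning_face = [0.55, 0.90, 0.20]   # face half buried in a pillow

print(unlock([0.12, 0.40, 0.75], enrolled))         # close match -> True
print(unlock(morning_face, enrolled))               # too different -> False
print(unlock(enrolled, enrolled, attention=False))  # not looking -> False
```

The point of the sketch is simply that rejection of a very different-looking scan (a pillow-squashed face, a covering scarf) is the system behaving correctly, not failing.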
We also know that for some changes, you can train Face ID. Say, for instance, that you start wearing glasses, grow a beard, or change your haircut: for all those things, Face ID can learn. Every time you enter your passcode after Face ID fails to unlock your phone, you are telling the neural networks that it was indeed you who just tried to use Face ID. This also explains why twins who share their passcodes might be able to open each other’s phones. This way of training, compared to Windows Hello, where I need to go into settings and rescan my face, underlines the machine-learning aspect of Face ID.
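The passcode-fallback training idea can be sketched the same way. Again, this is a hypothetical illustration, not Apple’s implementation: when a scan is rejected but the correct passcode follows, the stored template is nudged toward that scan, so gradual changes like glasses or a beard are learned over time.

```python
# Toy sketch (hypothetical): a rejected scan that is immediately followed
# by the correct passcode is treated as a confirmed sample of the owner,
# and the template drifts a fraction of the way toward it.

def update_template(template, confirmed_scan, rate=0.3):
    # Move each feature of the template partway toward the scan
    # that the passcode just vouched for.
    return [t + rate * (s - t) for t, s in zip(template, confirmed_scan)]

template = [0.10, 0.42, 0.77]
with_glasses = [0.30, 0.42, 0.77]  # one feature shifted by new glasses

# Several passcode-confirmed attempts pull the template closer:
for _ in range(3):
    template = update_template(template, with_glasses)

print(template[0])  # has drifted from 0.10 toward 0.30
```

This also makes the twins caveat concrete: if a sibling repeatedly fails Face ID and then enters a shared passcode, the template is told those scans were the owner and adapts accordingly.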
So, while Face ID’s refusal to let us into our own phones might be an inconvenience, it is exactly how I would want it to work so I can feel confident that my phone remains secure.
Building Trust for the Future
Building trust around Face ID is paramount for Apple. First, because Apple took away Touch ID when nobody asked for it. We were all very happy with how Touch ID worked and, more importantly, we all knew and trusted that it was secure. Second, gaining trust now is vital to building a foundation for the future.
It is reasonable to think that, as the technology matures and costs come down, Face ID could trickle down through the iPhone portfolio. Maybe not all the way, as for now, at least, some people are just not comfortable with it. It is also natural to expect Face ID to be added to the iPad Pro, which would gain more screen real estate without making the overall device larger. I also see some interesting use cases with Attention Awareness when it comes to page scrolling or app switching that Apple might want to explore on iPad. The same can be said, of course, about the Mac, where I personally would love to be able to use Face ID to log in and bring up a specific setup or user. I know Apple would always prefer that we each have our own devices, but some larger and more expensive devices are shared in the home, and Face ID could become not just the way into the device but also a personalization tool.
I also do wonder about use cases in the home where Face ID and voice could work together to provide added layers of security to access certain things from the front door to content on your TV. A long way away? Maybe. But what really matters is that whenever the technology is ready for us, we are ready to embrace it because we trust it and this is precisely what Apple is working on right now with Face ID on the iPhone X.
Last week, Google, Twitter, and Facebook’s lawyers and executives faced Senate and House hearings about Russian interference in our last election. Legislators put these companies through the wringer as they showed actual ads, bought by Russian operatives, that represented false news stories intended to influence and sway the last presidential election. These legislators asked hard questions and wanted real answers from the social media representatives about how they will make sure this does not happen again in the upcoming 2018 midterm elections.
While these company representatives said they were on top of the problem and shared how they were going to tweak their algorithms and take other measures to try to catch these false ads before they ever make it to their sites, I got the impression from the hearings that these lawmakers were not convinced Google, Facebook, and Twitter really had a handle on this and could deliver. Even worse for them, Sen. Dianne Feinstein scolded them for not catching this sooner and said she does not believe they understand the damage they have done to America’s democratic process. And while these companies may feel they passed the test of these hearings, I think they will come under even greater governmental scrutiny in the next year, and I am not ruling out that they may all be deemed media companies and come under some regulation before the next election.
The day after the hearings, Apple CEO Tim Cook tweeted that the issue is not just Russian interference but also the problem of fake news in general. More specifically, fake news created by ordinary citizens to push their personal beliefs or agendas, or fake news based simply on something interesting that someone wants to share with their friends.
In an interview with NBC, he stated: “I don’t believe that the big issue is ads from a foreign government. I believe that’s like .1 percent of the issue,” Cook told NBC Nightly News anchor Lester Holt in an exclusive interview that aired Wednesday night. “The bigger issue is that some of these tools are used to divide people, to manipulate people, to get fake news to people in broad numbers, and so, to influence their thinking,” Cook said. “And this, to me, is the No. 1 through 10 issue.”
I myself got caught up in sharing a fake news story during California’s heavy rains earlier this year. Someone posted a picture of that massive bridge collapse near Big Sur, and the picture was interesting and disturbing, and I thought it was newsworthy. However, a really good friend of mine saw what I posted and quickly corrected me, pointing out that it was a picture of a bridge closure in a different area from three years back. I immediately deleted my Facebook post and then sought out the real story, which concerned a serious road closure, but the actual picture was completely different and much less dramatic.
On another occasion, I posted a political story I thought was worth discussing but quickly found out it was fake and pulled it down immediately. I was duped, as I suspect many people are, especially when the story or post comes from someone they know and/or respect. What troubled me, once this had happened to me twice, was how easy it is for someone to share a fake news story and have it spread by people who know and trust them.
Remember the parlor game Gossip? I played it often when I was young. The idea is to whisper something to a person in a circle and see whether the original message is still the same when it is repeated by the last person in the circle. Very seldom was it. This is not to say that social media is necessarily a gossip machine, but it has clearly become more of a gossip medium than a pure vehicle for people to share their life stories and interests with their friends.
The other issue with fake news is that image tools make it easy to create fake images and tie them to a story. The one most used is Adobe Photoshop. I once took a serious photography class and was taught how to use Photoshop to alter, in this case, my own photo. I have to admit that when I was done, I looked younger and much thinner and was tempted to save that image. However, it was a fake image, and although I liked what I saw, I deleted it.
More than once I have fallen for a fake picture made with Photoshop and been tempted to share it with friends. But given what happened with the fake bridge photo I mentioned above, I now take extra measures to check a story’s source before posting anything on social media.
While I do think Facebook, Google, and Twitter will find ways to flush out false political ads over time, I am less convinced they will ever be able to stop fake news. It is especially difficult to stop when it is created by ordinary people, for whatever reason, and then shared by trusted friends who get duped through some form of personal interest or persuasion.
The advice that young Dustin Hoffman received was right. The future really is in plastics.
Of course, Hoffman’s character in the 1967 classic film “The Graduate” was being advised on a much more general-purpose form and application of plastics, but it turns out the statement is equally relevant today in the tech industry.
Some of the most fascinating work in the development of new tech-based products is happening on “plastic-like,” clear, flexible materials. In fact, at an Innovation Day put on by NextFlex—a consortium of government, academic, and private companies working to advance and standardize developments in flexible electronics manufacturing—I got a chance to see numerous efforts to bring flexible hybrid electronics (FHE) into the mainstream.
Most of the developments on display were related to flexible replacements for the printed circuit boards that sit at the heart of today’s tech devices. While we don’t always think about it, the rigid form today’s circuit boards take has a dramatic impact on the shape, design, and form factor of the devices they power. With the advent of pliable boards, the possibilities for completely new types of applications and devices become enticingly real across a wide range of industries. In fact, the early experiments with flexible electronics stretch from consumer devices to medical components to commercial systems and military applications.
At the event, GE showed off wireless wearable EKGs that dramatically ease the often-challenging process of connecting multiple wired leads to the patches placed on a patient’s body for traditional EKGs. Contract manufacturer Flex and chemical giant DuPont presented printed electronics designed for clothing that enable things like fabric with built-in warmers. They also showed off athletic clothing and racing suits with integrated biometric sensors for more advanced wearable computing and health-monitoring applications. Lockheed Martin showed sensors that attach to the curved wings of unmanned military drones. Universities like Stanford, Purdue, and Georgia Tech also displayed their research work in areas such as smart bandages, and self-powered wearable electronics. There was even a group of high-school students talking about a Shark Tank-style event they competed in where they had to create potential applications for FHE.
Long-time tech industry observers might argue that this is nothing new. After all, weren’t people talking about “printing” electronics onto these types of materials just a few years back? The expectation was that you’d be able to use sophisticated inkjet print-heads and specialized inks to crank out roll-upon-roll of complicated circuits in a simple, fast, cost-effective way.
Truth is, the industry tried to make a go of it, but a number of technical and financial realities quickly sidelined those efforts—likely forever. As former tech analyst and current NextFlex Director of Commercialization, Paul Semenza, put it, the cost of fabricating a single transistor is nearly zero with the current, highly efficient silicon manufacturing processes, so any effort to improve on that would be futile.
Despite these initial setbacks, all was not lost though. As with many big picture ideas, some concepts embedded within the overall printed electronics idea did prove to be useful. Specifically, the ability to print the lines, traces, and other interconnect elements typically found on circuit boards—arguably the simplest part of the original idea—turned out to be a good, practical application of the concept and original technology. Conductive inks printed onto various plastic films are perfectly suited to that part of a circuit board.
The integrated circuits (ICs) that power these boards couldn’t effectively be printed, however, because of the cost issues mentioned earlier. So, did it take a sophisticated new breakthrough to overcome the IC challenge? Turns out, the solution to the issue is surprisingly straightforward—essentially, removing the vast majority of the packaging around the chip itself.
The tiny silicon components inside most chips can be rendered thin and fairly pliable—it’s been the harder, thicker packaging around the chips that has prevented them from being used to create flexible electronics. By taking those packaging elements away and applying some clever engineering to protect and attach the resulting “raw” silicon components onto the plastic substrate used as the base for the electronics, you can build completely flexible circuit boards. These boards combine elements of both flexible materials and traditional semiconductors—hence, flexible hybrid electronics, or FHE.
One challenge, of course, is that the more complex the IC—such as an Intel CPU—the less flexible it is, and the more challenging it is to use on a flexible circuit board. While it’s easy to assume that this is because of the size of the chip, it’s actually due to the complexity of interconnections required, and not its physical size. That’s why early efforts for FHE are focused on simpler chips, such as those used on the popular Arduino board. Arduinos are used by some device manufacturers and electronics hobbyists/makers around the world for an enormous range of different products and projects.
The notion of flexibility and the potential use of plastics isn’t limited to the circuit boards found inside tech products either. Manufacturers of display components, in particular, have been exploring and experimenting with plastic materials and flexible displays for well over a decade. While they aren’t technically a type of FHE, LCD and OLED panels do incorporate some basic circuits on them to control what is shown on the display. Already, companies like LG and Samsung are using plastic substrates, or backplanes, for a number of different commercial products, including the curved OLED screens on Samsung’s Galaxy phones, the iPhone X, smart watches, and other products. In addition to flexibility, one of the key benefits of using plastic substrates in a display is the reduced potential for breakage that commonly occurs with glass-based displays.
The next expected development in displays moves beyond simple flexing and into folding. Several companies, in fact, have already shown prototypes of devices like smartphones that you can fold in half to reduce a large-screen device into something that will fit in your pocket. The challenge for these kinds of displays—as it is for the flexible circuit boards—is building them reliably and cost-effectively in large quantities. There’s an enormous difference between being able to build a handful of prototypes and cranking out millions of foldable displays and circuit boards.
Thankfully, key advances in material sciences, manufacturing equipment, manufacturing processes, and more are starting to come together in a meaningful way, enabling the start of the flexible hybrid electronics era. While products built with these key component technologies won’t be showing up overnight, their practical applications are clearly coming into sight. Once they do, the possibilities for what tech devices can do, how they work, and what form they take are nearly limitless. In fact, you can even start to imagine a world populated by more organic types of computing devices.